Deeplinks
EFF's Deeplinks Blog: Noteworthy news from around the internet
This past January the new administration issued an executive order on Artificial Intelligence (AI), replacing the now-rescinded Biden-era order and calling for a new AI Action Plan tasked with “unburdening” the current AI industry to stoke innovation and remove “engineered social agendas” from the industry. That action plan is now being developed, and the National Science Foundation (NSF) is accepting public comments on it. EFF answered with a few clear points: First, government procurement of automated decision-making (ADM) technologies must be done with transparency and public accountability—no secret and untested algorithms should decide who keeps their job or who is denied safe haven in the United States. Second, Generative AI policy rules must be narrowly focused and proportionate to actual harms, with an eye on protecting other public interests. And finally, we shouldn't entrench the biggest companies and gatekeepers with AI licensing schemes.

Government Automated Decision Making

US procurement of AI has moved with remarkable speed and an alarming lack of transparency. By wasting money on systems with no proven track record, this procurement not only entrenches the largest AI companies, but risks infringing the civil liberties of all people subject to these automated decisions. These harms aren’t theoretical: we have already seen a move to adopt experimental AI tools in policing and national security, including immigration enforcement. Recent reports also indicate the Department of Government Efficiency (DOGE) intends to apply AI to evaluate federal workers, and use the results to make decisions about their continued employment.

Automating important decisions about people is reckless and dangerous. At best these new AI tools are ineffective nonsense machines that require more labor to correct inaccuracies; at worst they produce irrational and discriminatory outcomes obscured by the black-box nature of the technology. Instead, the adoption of such tools must be done with a robust public notice-and-comment practice as required by the Administrative Procedure Act. This process helps weed out wasteful spending on AI snake oil, and identifies when the use of such AI tools is inappropriate or harmful.

Additionally, the AI Action Plan should favor tools developed under the principles of free and open-source software. These principles are essential for evaluating the efficacy of these models, and ensure they uphold a more fair and scientific development process. Furthermore, more open development stokes innovation and ensures public spending ultimately benefits the public—not just the most established companies.

Don’t Enable Powerful Gatekeepers

Spurred by the general anxiety about Generative AI, lawmakers have drafted sweeping regulations based on speculation, and with little regard for the multiple public interests at stake. Though there are legitimate concerns, this reactionary approach to policy is exactly what we warned against back in 2023. For example, bills like NO FAKES and NO AI Fraud expand copyright laws to favor corporate giants over everyone else’s expression. NO FAKES even includes a scheme for a DMCA-like notice-and-takedown process, long bemoaned by creatives online for encouraging broader and automated online censorship. Other policymakers propose technical requirements like watermarking that are riddled with practical points of failure.
Among these dubious solutions is the growing prominence of AI licensing schemes, which limit the potential of AI development to the highest bidders. This intrusion on fair use creates a paywall protecting only the biggest tech and media publishing companies—cutting out the actual creators these licenses nominally protect. It’s like helping a bullied kid by giving them more lunch money to give their bully. This is the wrong approach. Looking for easy solutions, like expanding copyright, hurts everyone, particularly smaller artists, researchers, and businesses who cannot compete with the big gatekeepers of industry. AI has threatened the fair pay and treatment of creative labor, but sacrificing secondary use doesn’t remedy the underlying imbalance of power between labor and oligopolies. People have a right to engage with culture and express themselves unburdened by private cartels. Policymakers should focus on narrowly crafted policies to preserve these rights, and keep rulemaking constrained to tested solutions addressing actual harms. You can read our comments here.
EFF’s most important platform for welcoming everyone to join us in our fight for a better digital future is our website, eff.org. We thank Fastly for their generous in-kind contribution of services helping keep EFF’s website online. Eff.org was first registered in 1990, just three months after the organization was founded, and long before the web was an essential part of daily life. Our website and the fight for digital rights grew rapidly alongside each other. However, along with rising threats to our freedoms online, threats to our site have also grown. It takes a village to keep eff.org online in 2025. Every day our staff work tirelessly to protect the site from everything from DDoS attacks to automated hacking attempts, and everything in between. As AI has taken off, so have crawlers and bots that scrape content to train LLMs, sometimes without respecting rate limits we’ve asked them to observe. Newly donated security add-ons from Fastly help us automate DDoS prevention and rate limiting, preventing our servers from getting overloaded when misbehaving visitors abuse our sites. Fastly also caches the content from our site around the globe, meaning that visitors from all over the world can access eff.org and our other sites quickly and easily. EFF is member-supported by people who share our vision for a better digital future. We thank Fastly for showing their support for our mission to ensure that technology supports freedom, justice, and innovation for all people of the world with an in-kind gift of their full suite of services.
Please join EFF for the next segment of EFFecting Change, our livestream series covering digital privacy and free speech.

EFFecting Change Livestream Series: Is There Hope for Social Media?
Thursday, March 20th
12:00 PM - 1:00 PM Pacific - Check Local Time
This event is LIVE and FREE!

Users are frustrated with legacy social media companies. Is it possible to effectively build the kinds of communities we want online while avoiding the pitfalls that have driven people away? Join our panel featuring EFF Civil Liberties Director David Greene, EFF Director for International Freedom of Expression Jillian York, Mastodon's Felix Hlatky, Bluesky's Emily Liu, and Spill's Kenya Parham as they explore the future of free expression online and why social media might still be worth saving. We hope you and your friends can join us live! Be sure to spread the word, and share our past livestreams. Please note that all events will be recorded for later viewing on our YouTube page. Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates.
In January, Meta made targeted changes to its hateful conduct policy that would allow dehumanizing statements to be made about certain vulnerable groups. More specifically, Meta’s hateful conduct policy now contains the following text: People sometimes use sex- or gender-exclusive language when discussing access to spaces often limited by sex or gender, such as access to bathrooms, specific schools, specific military, law enforcement, or teaching roles, and health or support groups. Other times, they call for exclusion or use insulting language in the context of discussing political or religious topics, such as when discussing transgender rights, immigration, or homosexuality. Finally, sometimes people curse at a gender in the context of a romantic break-up. Our policies are designed to allow room for these types of speech. The revision of this policy timed to Trump’s second election demonstrates that the company is focused on allowing more hateful speech against specific groups, with a noticeable and particular focus on enabling more speech challenging LGBTQ+ rights. For example, the revised policy removed previous prohibitions on comparing people to inanimate objects, feces, and filth based on their protected characteristics, such as sexual identity. In response, LGBTQ+ rights organization AllOut gathered social justice groups and civil society organizations, including EFF, to demand that Meta immediately reverse the policy changes. By normalizing such speech, Meta risks increasing hate and discrimination against LGBTQ+ people on Facebook, Instagram and Threads. The campaign is supported by the following partners: All Out, Global Project Against Hate and Extremism (GPAHE), Electronic Frontier Foundation (EFF), EDRi - European Digital Rights, Bits of Freedom, SUPERRR Lab, Danes je nov dan, Corporación Caribe Afirmativo, Fundación Polari, Asociación Red Nacional de Consejeros, Consejeras y Consejeres de Paz LGBTIQ+, La Junta Marica, Asociación por las Infancias Transgénero, Coletivo LGBTQIAPN+ Somar, Coletivo Viveração, and ADT - Associação da Diversidade Tabuleirense, Casa Marielle Franco Brasil, Articulação Brasileira de Gays - ARTGAY, Centro de Defesa dos Direitos da Criança e do Adolescente Padre, Marcos Passerini-CDMP, Agência Ambiental Pick-upau, Núcleo Ypykuéra, Kurytiba Metropole, ITTC - Instituto Terra, Trabalho e Cidadania. Sign the AllOut petition (external link) and tell Meta: Stop hate speech against LGBT+ people! If Meta truly values freedom of expression, we urge it to redirect its focus to empowering some of its most marginalized speakers, rather than empowering only their detractors and oppressive voices.
EFF is deeply saddened to learn of the passing of Mark Klein, a bona fide hero who risked civil liability and criminal prosecution to help expose a massive spying program that violated the rights of millions of Americans.

Mark didn’t set out to change the world. For 22 years, he was a telecommunications technician for AT&T, most of that in San Francisco. But he always had a strong sense of right and wrong and a commitment to privacy. When the New York Times reported in late 2005 that the NSA was engaging in spying inside the U.S., Mark realized that he had witnessed how it was happening. He also realized that the President was not telling Americans the truth about the program. And, though newly retired, he knew that he had to do something. He showed up at EFF’s front door in early 2006 with a simple question: “Do you folks care about privacy?”

We did. And what Mark told us changed everything. Through his work, Mark had learned that the National Security Agency (NSA) had installed a secret, secure room at AT&T’s central office in San Francisco, called Room 641A. Mark was assigned to connect circuits carrying Internet data to optical “splitters” that sat just outside of the secret NSA room but were hardwired into it. Those splitters—as well as similar ones in cities around the U.S.—made a copy of all data going through those circuits and delivered it into the secret room.

Mark not only saw how it works, he had the documents to prove it. He brought us over a hundred pages of authenticated AT&T schematic diagrams and tables. Mark also shared this information with major media outlets, numerous Congressional staffers, and at least two senators personally. One, Senator Chris Dodd, took the floor of the Senate to acknowledge Mark as the great American hero he was.

We used Mark’s evidence to bring two lawsuits against the NSA spying that he uncovered. The first was Hepting v. AT&T and the second was Jewel v. NSA. Mark also came with us to Washington D.C. to push for an end to the spying and demand accountability for it happening in secret for so many years. He wrote an account of his experience called Wiring Up the Big Brother Machine . . . And Fighting It. Mark stood up and told the truth at great personal risk to himself and his family. AT&T threatened to sue him, although it wisely decided not to do so.

While we were able to use his evidence to make some change, both EFF and Mark were ultimately let down by Congress and the Courts, which have refused to take the steps necessary to end the mass spying even after Edward Snowden provided even more evidence of it in 2013. But Mark certainly inspired all of us at EFF, and he helped inspire and inform hundreds of thousands of ordinary Americans to demand an end to illegal mass surveillance. While we have not yet seen the success in ending the spying that we all have hoped for, his bravery helped usher in numerous reforms so far. And the fight is not over. Section 702, the law that now authorizes the continued surveillance that Mark first revealed, expires in early 2026. EFF and others will continue to push for continued reforms and, ultimately, for the illegal spying to end entirely.

Mark’s legacy lives on in our continuing fights to reform surveillance and honor the Fourth Amendment’s promise of protecting personal privacy. We are forever grateful to him for having the courage to stand up and will do our best to honor that legacy by continuing the fight.
As a legal organization that has fought in court to defend the rights of technology users for almost 35 years, including numerous legal challenges to federal government overreach, Electronic Frontier Foundation unequivocally supports Perkins Coie’s challenge to the Trump administration’s shocking, vindictive, and unconstitutional Executive Order. In punishing the law firm for its zealous advocacy on behalf of its clients, the order offends the First Amendment, the rule of law, and the legal profession broadly in numerous ways. We commend Perkins Coie (and its legal representatives) for fighting back. Lawsuits against the federal government are a vital component of the system of checks and balances that undergirds American democracy. They reflect a confidence in both the judiciary to decide such matters fairly and justly, and the executive to abide by the court’s determination. They are a backstop against autocracy and a sustaining feature of American jurisprudence since Marbury v. Madison, 5 U.S. 137 (1803). The Executive Order, if enforced, would upend that system and set an appalling precedent: Law firms that represent clients adverse to a given administration can and will be punished for doing their jobs. This is a fundamental abuse of executive power. The constitutional problems are legion, but here are a few: The First Amendment bars the government from “distorting the legal system by altering the traditional role of attorneys” by controlling what legal arguments lawyers can make. See Legal Services Corp. v. Velasquez, 531 U.S. 533, 544 (2001). “An informed independent judiciary presumes an informed, independent bar.” Id. at 545. The Executive Order is also unconstitutional retaliation for Perkins Coie’s engaging in constitutionally protected speech during the course of representing its clients. See Nieves v. Bartlett, 587 U.S. 391, 398 (2019). And the Executive Order functions as an illegal loyalty oath for the entire legal profession, conditioning access to federal courthouses or client relationships with government contractors on fealty to the executive branch, including forswearing protected speech in opposition to it. That condition is blatantly unlawful: The government cannot require that those it works with or hires embrace certain political beliefs or promise that they have “not engaged, or will not engage, in protected speech activities such as … criticizing institutions of government.” See Cole v. Richardson, 405 U.S. 676, 680 (1972). Civil liberties advocates such as EFF rely on the rule of law and access to the courts to vindicate their clients’, and the public’s, fundamental rights. From this vantage point, we can see that this Executive Order is nothing less than an attack on the foundational principles of American democracy. The Executive Order must be swiftly nullified by the court and uniformly vilified by the entire legal profession. Click here for the number to listen in on a hearing on a temporary restraining order, scheduled for 2pmET/11amPT Wednesday, March 12.
The Anchorage Police Department (APD) has concluded its three-month trial of Axon’s Draft One, an AI system that uses audio from body-worn cameras to write narrative police reports for officers—and has decided not to retain the technology. Axon touts this technology as “force multiplying,” claiming it cuts in half the amount of time officers usually spend writing reports—but APD disagrees. The APD deputy chief told Alaska Public Media, “We were hoping that it would be providing significant time savings for our officers, but we did not find that to be the case.” The deputy chief flagged that the time it took officers to review reports cut into the time savings from generating the report. The software translates the audio into narrative, and officers are expected to read through the report carefully to edit it, add details, and verify it for authenticity. Moreover, because the technology relies on audio from body-worn cameras, it often misses visual components of the story that the officer then has to add themselves. “So if they saw something but didn’t say it, of course, the body cam isn’t going to know that,” the deputy chief continued. The Anchorage Police Department is not alone in claiming that Draft One is not a time saving device for officers. A new study into police using AI to write police reports, which specifically tested Axon’s Draft One, found that AI-assisted report-writing offered no real time-savings advantage. This news comes on the heels of policymakers and prosecutors casting doubt on the utility or accuracy of AI-created police reports. In Utah, a pending state bill seeks to make it mandatory for departments to disclose when reports have been written by AI. In King County, Washington, the Prosecuting Attorney’s Office has directed officers not to use any AI tools to write narrative reports. In an era where companies that sell technology to police departments profit handsomely and have marketing teams to match, it can seem like there is an endless stream of press releases and local news stories about police acquiring some new and supposedly revolutionary piece of tech. But what we don’t usually get to see is how many times departments decide that technology is costly, flawed, or lacks utility. As the future of AI-generated police reports rightly remains hotly contested, it’s important to pierce the veil of corporate propaganda and see when and if police departments actually find these costly bits of tech useless or impractical.
In a bold push for medical privacy, Hawaii's House of Representatives has introduced HCR 144/HR 138, a resolution calling for the Hawaii Attorney General to investigate whether crisis pregnancy centers (CPCs) are violating patient privacy laws. Often referred to as "fake clinics" or “unregulated pregnancy centers” (UPCs), these are non-medical centers that provide free pregnancy tests and counseling, but typically do not offer essential reproductive care like abortion or contraception. In Hawaii, these centers outnumber actual clinics offering abortion and reproductive healthcare. In fact, the first CPC in the United States was opened in Hawaii in 1967 by Robert Pearson, who then founded the Pearson Foundation, a St. Louis-based organization that assists local groups in setting up unregulated crisis pregnancy centers.

EFF has called on state AGs to investigate CPCs across the country. In particular, we are concerned that many centers have misrepresented their privacy practices, including suggesting that patient information is protected by HIPAA when it may not be. In January, EFF contacted attorneys general in Florida, Texas, Arkansas, and Missouri asking them to identify and hold accountable CPCs that engage in deceptive practices. Rep. Kapela’s resolution specifically references EFF’s call on state Attorneys General. It reads: “WHEREAS, the Electronic Frontiers Foundation, an international digital rights nonprofit that promotes internet civil liberties, has called on states to investigate whether crisis pregnancy centers are complying with patient privacy regulations with regard to the retention and use of collected patient data.”

HCR 144/HR 138 underscores the need to ensure that healthcare providers handle personal data, particularly medical data, securely and transparently. Along with EFF’s letters to state AGs, the resolution refers to the increasing body of research on the topic, such as:

A 2024 Healthcare Management Associates study showed that CPCs received $400 million in federal funding between 2017 and 2023, with little oversight from regulators.

A Health Affairs article from November 2024 titled "Addressing the HIPAA Blind Spot for Crisis Pregnancy Centers" noted that crisis pregnancy centers often invoke the Health Insurance Portability and Accountability Act (HIPAA) to collect personal information from clients.

Regardless of one's stance on reproductive healthcare, there is one principle that should be universally accepted: the right to privacy. As HCR 144/HR 138 moves forward, it is imperative that Hawaii's Attorney General investigate whether CPCs are complying with privacy regulations and take action, if necessary, to protect the privacy rights of individuals seeking reproductive healthcare in Hawaii. Without comprehensive privacy laws that offer individuals a private right of action, state authorities must be the front line in safeguarding the privacy of their constituents. As we continue to advocate for stronger privacy protections nationwide, we encourage lawmakers and advocates in other states to follow Hawaii's lead and take action to protect the medical privacy rights of all of their constituents.
A look back at the games governments played to avoid transparency

In the year 2015, we witnessed the launch of OpenAI, a debate over the color of a dress going viral, and a Supreme Court decision that same-sex couples have the right to get married. It was also the year that the Electronic Frontier Foundation (EFF) first published The Foilies, an annual report that hands out tongue-in-cheek "awards" to government agencies and officials that respond outrageously when a member of the public tries to access public records through the Freedom of Information Act (FOIA) or similar laws.

A lot has changed over the last decade, but one thing that hasn't is the steady flow of attempts by authorities to avoid their legal and ethical obligations to be open and accountable. Sometimes, these cases are intentional, but just as often, they are due to incompetence or straight-up half-assedness. Over the years, EFF has teamed up with MuckRock to document and ridicule these FOIA fails and transparency trip-ups. And through a partnership with AAN Publishers, we have named-and-shamed the culprits in weekly newspapers and on indie news sites across the United States in celebration of Sunshine Week, an annual event raising awareness of the role access to public records plays in a democracy. This year, we reflect on the most absurd and frustrating winners from the last 10 years as we prepare for the next decade, which may even be more terrible for government transparency.

The Most Infuriating FOIA Fee: U.S. Department of Defense (2016 Winner)

Assessing huge fee estimates is one way agencies discourage FOIA requesters.

Under FOIA, federal agencies are able to charge "reasonable" fees for producing copies of records. But sometimes agencies fabricate enormous price tags to pressure the requester to drop the query. In 2015, Martin Peck asked the U.S. Department of Defense (DOD) to disclose the number of "HotPlug” devices (tools used to preserve data on seized computers) it had purchased. The DOD said it would cost $660 million and 15 million labor hours (over 1,712 years), because its document system wasn't searchable by keyword, and staff would have to comb through 30 million contracts by hand.

Runners-up:

City of Seattle (2019 Winner): City officials quoted a member of the public $33 million for metadata for every email sent in 2017, but ultimately reduced the fee to $40.

Rochester (Michigan) Community Schools District (2023 Winner): A group of parents critical of the district's remote-learning plan requested records to see if the district was spying on their social media. One parent was told they would have to cough up $18,641,345 for the records, because the district would have to sift through every email.

Willacy County (Texas) Sheriff's Office (2016 Winner): When the Houston Chronicle asked for crime data, the sheriff sent them an itemized invoice that included $98.40 worth of Wite-Out–the equivalent of 55 bottles–to redact 1,016 pages of records.

The Most Ridiculous Redaction: Federal Bureau of Investigation (2015 Winner)

Ain't no party like a REDACTED FBI party!

Brad Heath, who in 2014 was a reporter at USA Today, got a tip that a shady figure had possibly attended an FBI retirement party. So he filed a request for the guest list and pictures taken at the event. In response, the FBI sent a series of surreal photos of the attendees, hugging, toasting, and posing awkwardly, but all with polygonal redactions covering their faces like some sort of mutant, Minecraft family reunion.
Runner-up:

U.S. Southern Command (2023 Winner): Investigative journalist Jason Leopold obtained scans of paintings by detainees at Guantanamo Bay, which were heavily redacted under the claim that the art would disclose law enforcement information that could "reasonably be expected to risk circumvention of the law."

The Most Reprehensible Reprisal Against a Requester: White Castle, Louisiana (2017 Winner)

WBRZ Reporter Chris Nakamoto was cuffed for trying to obtain records in White Castle, Louisiana. Credit: WBRZ-TV

Chris Nakamoto, at the time a reporter for WBRZ, filed a public records request to probe the White Castle mayor's salary. But when he went down to check on some of the missing records, he was handcuffed, placed in a holding cell, and charged with the crime of "remaining after being forbidden.” He was summoned to appear before the "Mayor's Court" in a judicial proceeding presided over by none other than the same mayor he was investigating. The charges were dropped two months later.

Runners-up:

Jack White (2015 Winner): One of the rare non-government Foilies winners, the White Stripes guitarist verbally abused University of Oklahoma student journalists and announced he wouldn't play at the school anymore. The reason? The student newspaper, OU Daily, obtained and published White's contract for a campus performance, which included his no-longer-secret guacamole recipe, a bowl of which was demanded in his rider.

Richlands, Virginia (2024 Winner): Resident Laura Mollo used public records laws to investigate problems with the 911 system and, in response, experienced intense harassment from the city and its contractors, including the police pulling her over and the city appointing a special prosecutor to investigate her. On separate occasions, Mollo even says she found her mailbox filled with spaghetti and manure.

Worst Federal Agency of the Decade: Federal Bureau of Investigation

Bashing the FBI has come back into vogue among certain partisan circles in recent years, but we've been slamming the feds long before it was trendy. The agency received eight Foilies over the last decade, more than any other entity, but the FBI's hostility towards FOIA goes back much further. In 2021, the Cato Institute uncovered records showing that, since at least 1989, the FBI had been spying on the National Security Archive, a non-profit watchdog that keeps an eye on the intelligence community. The FBI’s methods included both physical and electronic surveillance, and the records show the FBI specifically cited the organization's "tenacity" in using FOIA.

Cato's Patrick G. Eddington reported it took 11 months for the FBI to produce those records, but that's actually relatively fast for the agency. We highlighted a 2009 FOIA request that the FBI took 12 years to fulfill: Bruce Alpert of the Times-Picayune had asked for records regarding the corruption case of U.S. Rep. William Jefferson, but by the time he received the 84 pages in 2021, the reporter had retired. Similarly, when George Washington University professor and documentary filmmaker Nina Seavey asked the FBI for records related to surveillance of antiwar and civil rights activists, the FBI told her it would take 17 years to provide the documents. When the agency launched an online system for accepting FOIA requests, it somehow made the process even more difficult.
The FBI was at its worst when it was attempting to use non-disclosure agreements to keep local law enforcement agencies from responding to public records requests regarding the use of cell phone surveillance technologies called cell-site simulators, or "stingrays." The agency even went so far as to threaten agencies that release technical information to media organizations with up to 20 years in prison and a $1 million fine, claiming it would be a violation of the Arms Export Control Act. But you don't have to take our word for it: Even Micky Dolenz of The Monkees had to sue the FBI to get records on how agents collected intelligence on the 1960s band.

Worst Local Jurisdiction of the Decade: Chicago, Illinois

Some agencies, like the city of Chicago, treat FOIA requests like a plague.

Over the last decade, The Foilies have called out officials at all levels of government and in every part of the country (and even in several other countries), but time and time again, one city keeps demonstrating special antagonism to the idea of freedom of information: the Windy City. In fact, the most ridiculous justification for ignoring transparency obligations we ever encountered was proudly championed by now-former Mayor Lori Lightfoot during the COVID-19 lockdown in April 2020. She offered a bogus choice to Chicagoans: the city could either process public records requests or provide pandemic response, falsely claiming that answering these requests would pull epidemiologists off the job. According to the Chicago Tribune, she implied that responding to FOIA requests would result in people having to "bury another grandmother." She even invoked the story of Passover, claiming that the "angel of death is right here in our midst every single day" as a reason to suspend FOIA deadlines.

If we drill down on Chicago, there's one particular department that seems to take particular pleasure in screwing the public: the Chicago Police Department (CPD). In 2021, CPD was nominated so many times (for withholding records of search warrants, a list of names of police officers, and body-worn camera footage from a botched raid) that we just threw up our hands and named them "The Hardest Department to FOIA" of the year. In one particularly nasty case, CPD had mistakenly raided the home of an innocent woman and handcuffed her while she was naked and did not allow her to dress. Later, the woman filed a FOIA request for the body-worn camera footage and had to sue to get it. But CPD didn't leave it there: the city's lawyers tried to block a TV station from airing the video and then sought sanctions against the woman's attorney.

If you thought these were some doozies, check out The Foilies 2025 (to be published on March 16) to read the beginning of a new decade's worth of FOIA horror stories.
Good old-fashioned grassroots advocacy is one of the best tools we have right now for making a positive change for our civil liberties online. When we unite toward a shared goal, anything is possible, and the right to repair movement is a prime example of this. In July of last year, EFF and many other organizations celebrated Repair Independence Day to commemorate both California and Minnesota enacting strong right to repair laws. And, very recently, it was reported that all 50 states have introduced right to repair legislation. Now, not every state has passed laws yet, but this signals an important milestone for the movement—we want to fix the stuff we own!

And this movement has had an impact beyond specific right to repair legislation. In a similar vein, just a few months ago, the U.S. Copyright Office ruled that users can legally repair commercial food preparation equipment without breaking copyright law. Device manufacturers themselves are also starting to feel the pressure and are creating repair-friendly programs. Years of hard work have made it possible for us to celebrate the right-to-repair movement time and time again. It's a group effort—folks like iFixit, who provide repair guides and repairability scores; the Repair Association, who’ve helped lead the movement in state legislatures; and of course, people like you who contact local representatives, are the reason this movement has gained so much momentum.

Fix Copyright! Also available in kids' sizes.

But there's still work that can be done. If you’re itching to fix your devices, you can read up on what your state’s repair laws mean for you. You can educate your friends, family, and colleagues when they’re frustrated at how expensive device repair is. And, of course, you can show your support for the right to repair movement with EFF’s latest member t-shirt. We live in a very tumultuous time, so it’s important to celebrate the victories, and it’s equally important to remember that your voice and support can bring about positive change that you want to see.
On Monday, March 10, EFF sent a letter to the Senate Judiciary Committee opposing the Strengthening Transparency and Obligation to Protect Children Suffering from Abuse and Mistreatment Act (STOP CSAM Act) ahead of a committee hearing on the bill. EFF opposed the original and amended versions of this bill in the previous Congress, and we are concerned to see the Committee moving to consider the same flawed ideas in the current Congress. At its core, STOP CSAM endangers encrypted messages – jeopardizing the privacy, security, and free speech of every American and fundamentally altering our online communications. In the digital world, end-to-end encryption is our best chance to maintain both individual and national security. Particularly in the wake of the major breach of telecom systems in October 2024 from Salt Typhoon, a sophisticated Chinese-government backed hacking group, legislators should focus on bolstering encryption, not weakening it. In fact, in response to this breach, a top U.S. cybersecurity chief said “encryption is your friend.” Given its significant problems and potential vast impact on internet users, we urge the Committee to reject this bill.
Last month saw digital rights organizations and social justice groups head to Taiwan for this year's RightsCon conference on human rights in the digital age. During the conference, one prominent message was spoken loud and clear: Alaa Abd El-Fattah must be immediately released from illegal detention in Egypt.

During the RightsCon opening ceremony, Access Now’s Executive Director, Alejandro Mayoral Baños, affirmed the urgency of Alaa’s situation in detention and called for Alaa’s freedom. The RightsCon community was also addressed by Alaa’s mother, mathematician Laila Soueif, who has been on hunger strike in London for 158 days. In a video highlighting Alaa’s work with digital rights and his role in this community, she stated: “As Alaa’s mother, I thank you for your solidarity and ask you to not give up until Alaa is out of prison.” Laila was admitted to hospital the next day with dangerously low blood sugar, blood pressure and sodium levels.

RightsCon participants gather in solidarity with the #FreeAlaa campaign

The calls to #FreeAlaa and save Laila were again reaffirmed during the closing ceremony in a keynote by Sara Alsherif, Migrant Digital Justice Programme Manager at Open Rights Group and close friend of Alaa. Referencing Alaa’s early work as a digital activist, Alsherif said: “He understood that the fight for digital rights is at the core of the struggle for human rights and democracy.” She closed by reminding the hundreds-strong audience that “Alaa could be any one of us … Please do for him what you would want us to do for you if you were in his position.”

During RightsCon, with Laila still in hospital, calls for UK Prime Minister Starmer to get on the phone with Egyptian President Sisi reached a fever pitch, and on 28 February, one day after the closing ceremony, the UK government issued a press release affirming that Alaa’s case had been discussed, with Starmer pressing for Alaa’s freedom. Alaa should have been released on September 29, after serving a five-year sentence for sharing a Facebook post about a death in police custody, but Egyptian authorities have continued his imprisonment in contravention of the country’s own Criminal Procedure Code. British consular officials are prevented from visiting him in prison because the Egyptian government refuses to recognise Alaa’s British citizenship.

Laila Soueif has been on hunger strike for more than five months while she and the rest of his family have worked in concert with various advocacy groups to engage the British government in securing Alaa’s release. On December 12, she also started protesting daily outside the Foreign Office and has since been joined by numerous MPs and public figures. Laila remains in hospital but, following Starmer’s call with Sisi, agreed to take glucose, and she has stated that she is ready to end her hunger strike if progress is made.

Laila Soueif and family meeting with UK Prime Minister Keir Starmer

As of March 6, Laila has moved to a partial hunger strike of 300 calories per day, citing “hope that Alaa’s case might move.” However, the family has learned that Alaa himself began a hunger strike on March 1 in prison after hearing that his mother had been hospitalized. Laila has said that without fast movement on Alaa’s case she will return to a total hunger strike. Alaa’s sister Sanaa, who was previously jailed by the regime on bogus charges, visited Alaa on March 8.
If you’re based in the UK, we encourage you to write to your MP to urgently advocate for Alaa’s release (external link): https://freealaa.net/message-mp

Supporters everywhere can share Alaa’s plight and Laila’s story on social media using the hashtags #FreeAlaa and #SaveLaila. Additionally, the campaign’s website (external link) offers further actions, including purchasing Alaa’s book and participating in a one-day solidarity hunger strike. You can also sign up for campaign updates by e-mail. Every second counts, and time is running out. Keir Starmer and the British government must do everything they can to ensure Alaa’s immediate and unconditional release.
I’m old enough to remember when age verification bills were pitched as a way to ‘save the kids from porn’ and shield them from other vague dangers lurking in the digital world (like…“the transgender”). We have long cautioned about the dangers of these laws, and pointed out why they are likely to fail. While they may be well-intentioned, the growing proliferation of age verification schemes poses serious risks to all of our digital freedoms.

Fast forward a few years, and these laws have morphed into something else entirely—unfortunately, something we expected. What started as a misguided attempt to protect minors from "explicit" content online has spiraled into a tangled mess of privacy-invasive surveillance schemes affecting skincare products, dating apps, and even diet pills, threatening everyone’s right to privacy.

Age Verification Laws: A Backdoor to Surveillance

Age verification laws do far more than ‘protect children online’—they require the creation of a system that collects vast amounts of personal information from everyone. Instead of making the internet safer for children, these laws force all users—regardless of age—to verify their identity just to access basic content or products. This isn't a mistake; it's a deliberate strategy. As one sponsor of age verification bills in Alabama admitted, "I knew the tough nut to crack that social media would be, so I said, ‘Take first one bite at it through pornography, and the next session, once that got passed, then go and work on the social media issue.’” In other words, they recognized that targeting porn would be an easier way to introduce these age verification systems, knowing it would be more emotionally charged and easier to pass. This is just the beginning of a broader surveillance system disguised as a safety measure.

This alarming trend is already clear, with the growing creep of age verification bills filed in the first month of the 2025-2026 state legislative session. Consider these three bills:

Skincare: AB-728 in California

Age verification just hit the skincare aisle! California’s AB-728 mandates age verification for anyone purchasing skin care products or cosmetics that contain certain chemicals like Vitamin A or alpha hydroxy acids. On the surface, this may seem harmless—who doesn't want to ensure that minors are safe from harmful chemicals? But the real issue lies in the invasive surveillance it mandates. A person simply trying to buy face cream could be forced to submit sensitive personal data through “an age verification system,” creating a system of constant tracking and data collection for a product that should be innocuous.

Dating Apps: A3323 in New York

Match made in heaven? Not without your government-issued ID. New York’s A3323 bill mandates that online dating services verify users’ age, identity, and location before allowing access to their platforms. The bill's sweeping requirements introduce serious privacy concerns for all users. By forcing users to provide sensitive personal information—such as government-issued IDs and location data—the bill creates significant risks that this data could be misused, sold, or exposed through data breaches.

Dieting products: SB 5622 in Washington State

Shed your privacy before you shed those pounds! Washington State’s SB 5622 takes aim at diet pills and dietary supplements by restricting their sale to anyone under 18.
While the bill’s intention is to protect young people from potentially harmful dieting products, it misses the mark by overlooking the massive privacy risks associated with the age verification process for everyone else. To enforce this restriction, the bill requires intrusive personal data collection for purchasing diet pills in person or online, opening the door for sensitive information to be exploited.

The Problem with Age Verification: No Solution Is Safe

Let’s be clear: no method of age verification is both privacy-protective and entirely accurate. The methods also don’t fall on a neat spectrum of “more safe” to “less safe.” Instead, every form of age verification is better described as “dangerous in one way” or “dangerous in a different way.” These systems are inherently flawed, and none come without trade-offs. Additionally, they continue to burden adults who just want to browse the internet or buy everyday items without being subjected to mass data collection.

For example, when an age verification system requires users to submit government-issued identification or a scan of their face, it collects a staggering amount of sensitive, often immutable, biometric or other personal data—jeopardizing internet users’ privacy and security. Systems that rely on credit card information, phone numbers, or other third-party material similarly amass troves of personal data. This data is just as susceptible to being misused as any other data, creating vulnerabilities for identity theft and data breaches. These issues are not just theoretical: age verification companies can be—and already have been—hacked. These are real, ongoing concerns for anyone who values their privacy.

We must push back against age verification bills that create surveillance systems and undermine our civil liberties, and we must be clear-eyed about the dangers posed by these expanding age verification laws. While the intent to protect children makes sense, the unintended consequence is a massive erosion of privacy, security, and free expression online for everyone. Rather than focusing on restrictive age verification systems, lawmakers should explore better, less invasive ways to protect everyone online—methods that don’t place the entire burden of risk on individuals or threaten their fundamental rights. EFF will continue to advocate for digital privacy, security, and free expression. We urge legislators to prioritize solutions that uphold these essential values, ensuring that the internet remains a space for learning, connecting, and creating—without the constant threat of surveillance or censorship. Whether you’re buying a face cream, swiping on a dating app, or browsing for a bottle of diet pills, age verification laws undermine that vision, and we must do better.
We recently learned that users of the Albion Online gaming forum have received direct messages purporting to be from us. That message, which leverages the fear of an account ban, is a phishing attempt. If you’re an Albion Online forum user and receive a message that claims to be from “the EFF team,” don’t click the link, and be sure to use the in-forum reporting tool to report the message and the user who sent it to the moderators.

A screenshot of the message shared by a user of the forums.

The message itself has some of the usual hallmarks of a phishing attempt, including tactics like creating a sense of fear that your account may be suspended, leveraging the name of a reputable group, and further raising your heart rate with claims that the message needs a quick response. The goal appears to be to get users to download a PDF file designed to deliver malware. That PDF even uses our branding and typefaces (mostly) correctly. A full walkthrough of this malware and what it does was documented by the Hunt team. The PDF is a trojan, or malware disguised as a non-malicious file or program, that has an embedded script that calls out to an attacker server. The attacker server then sends a “stage 2” payload that installs itself onto the user’s device. The attack structure used was discovered to be the Pyramid C2 framework. In this case, the malware targets the Windows operating system. It takes a variety of actions, like writing and modifying files on the victim’s physical drive. But the most worrisome discovery is that it appears to connect the user’s device to a malicious botnet and has potential access to the “VaultSvc” service, which securely stores user credentials, such as usernames and passwords.

File-based IoCs:
URL: act-7wbq8j3peso0qc1.pages[.]dev/819768.pdf
Hash (SHA-256): 4674dec0a36530544d79aa9815f2ce6545781466ac21ae3563e77755307e0020

This incident is a good reminder that often, the best ways to avoid malware and phishing attempts are the same: avoid clicking strange links in unsolicited emails, keep your computer’s software updated, and always scrutinize messages claiming to come from computer support or fraud detection. If a message seems suspect, try to verify its authenticity through other channels—in this case, poking around on the forum and asking other users before clicking on anything. If you ever absolutely must open a file, do so in an online document reader, like Google Drive, or try sending the link through a tool like VirusTotal, but try to avoid opening suspicious files whenever possible. For more information to help protect yourself, check out our guides for protecting yourself from malware and avoiding phishing attacks.
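If you have already downloaded a file from a message like this and want to check it against the published indicator of compromise, a minimal sketch along these lines (purely illustrative; the filename is hypothetical, and a non-matching hash does not mean a file is safe) computes the file's SHA-256 digest and compares it to the hash listed above:

```python
# Illustrative only: compare a local file's SHA-256 digest to the IoC hash
# published for this incident. The filename below is hypothetical.
import hashlib
from pathlib import Path

IOC_SHA256 = "4674dec0a36530544d79aa9815f2ce6545781466ac21ae3563e77755307e0020"

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    suspect = Path("819768.pdf")  # hypothetical name of the downloaded file
    if suspect.exists():
        if sha256_of(suspect) == IOC_SHA256:
            print("File MATCHES the known-malicious hash")
        else:
            print("File does not match this IoC (it could still be unsafe)")
    else:
        print("File not found; nothing to check")
```

Services like VirusTotal perform this kind of hash lookup against many scanners at once, which is generally a safer first step than opening the file at all.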
We've opposed the Take It Down Act because it could be easily manipulated to take down lawful content that powerful people simply don't like. Last night, President Trump demonstrated he has a similar view on the bill. He wants to sign the bill into law, then use it to remove content about — him. And he won't be the only powerful person to do so. Here’s what Trump said to a joint session of Congress:

The Senate just passed the Take It Down Act…. Once it passes the House, I look forward to signing that bill into law. And I’m going to use that bill for myself too if you don’t mind, because nobody gets treated worse than I do online, nobody.

[Embedded video of the remarks, served from archive.org]

The Take It Down Act is an overbroad, poorly drafted bill that would create a powerful system to pressure removal of internet posts, with essentially no safeguards. While the bill is meant to address a serious problem—the distribution of non-consensual intimate imagery (NCII)—the notice-and-takedown system it creates is an open invitation for powerful people to pressure websites into removing content they dislike. There are no penalties for applying very broad, or even farcical definitions of what constitutes NCII, and then demanding that it be removed.

Congress should believe Trump when he says he would use the Take It Down Act simply because he's "treated badly," despite the fact that this is not the intention of the bill. There is nothing in the law, as written, to stop anyone—especially those with significant resources—from misusing the notice-and-takedown system to remove speech that criticizes them or that they disagree with. Trump has frequently targeted platforms carrying content and speakers of entirely legal speech that is critical of him, both as an elected official and as a private citizen. He has filed frivolous lawsuits against media defendants which threaten to silence critics and draw scarce resources away from important reporting work. Now that Trump issued a call to action for the bill in his remarks, there is a possibility that House Republicans will fast track the bill into a spending package as soon as next week. Non-consensual intimate imagery is a serious problem that deserves serious consideration, not a hastily drafted, overbroad bill that sweeps in legal, protected speech.

How The Take It Down Act Could Silence People

A few weeks ago, a "deepfake" video of President Trump and Elon Musk was displayed across various monitors in the Housing and Urban Development office. The video was subsequently shared on various platforms. While most people wouldn't consider this video, which displayed faked footage of Trump kissing Elon Musk's feet, "nonconsensual intimate imagery," the takedown provision of the bill applies to an “identifiable individual” engaged in “sexually explicit conduct.” This definition leaves much room for interpretation, and nudity or graphic displays are not necessarily required. Moreover, there are no penalties whatsoever to dissuade a requester from simply insisting that content is NCII.
Apps and websites only have 48 hours to remove content once they receive a request, which means they won’t be able to verify claims. Especially if the requester is an elected official with the power to start an investigation or prosecution, what website would stand up to such a request?

The House Must Not Pass This Dangerous Bill

Congress should focus on enforcing and improving the many existing civil and criminal laws that address NCII, rather than opting for a broad takedown regime that is bound to be abused. Take It Down would likely lead to the use of often-inaccurate automated filters that are infamous for flagging legal content, from fair-use commentary to news reporting. It will threaten encrypted services, which may respond by abandoning encryption entirely in order to be able to monitor content—turning private conversations into surveilled spaces. Protecting victims of NCII is a legitimate goal. But good intentions alone are not enough to make good policy. Tell your Member of Congress to oppose censorship and to oppose H.R. 633.
At EFF we spend a lot of time thinking about Street Level Surveillance technologies—the technologies used by police and other authorities to spy on you while you are going about your everyday life—such as automated license plate readers, facial recognition, surveillance camera networks, and cell-site simulators (CSS). Rayhunter is a new open source tool we’ve created that runs off an affordable mobile hotspot that we hope empowers everyone, regardless of technical skill, to help search out CSS around the world.

CSS (also known as Stingrays or IMSI catchers) are devices that masquerade as legitimate cell-phone towers, tricking phones within a certain radius into connecting to the device rather than a tower. CSS operate by conducting a general search of all cell phones within the device’s radius. Law enforcement use CSS to pinpoint the location of phones often with greater accuracy than other techniques such as cell site location information (CSLI) and without needing to involve the phone company at all. CSS can also log International Mobile Subscriber Identifiers (IMSI numbers) unique to each SIM card, or hardware serial numbers (IMEIs) of all of the mobile devices within a given area. Some CSS may have advanced features allowing law enforcement to intercept communications in some circumstances.

What makes CSS especially interesting, as compared to other street level surveillance, is that so little is known about how commercial CSS work. We don’t fully know what capabilities they have or what exploits in the phone network they take advantage of to ensnare and spy on our phones, though we have some ideas. We also know very little about how cell-site simulators are deployed in the US and around the world. There is no strong evidence either way about whether CSS are commonly being used in the US to spy on First Amendment protected activities such as protests, communication between journalists and sources, or religious gatherings. There is some evidence—much of it circumstantial—that CSS have been used in the US to spy on protests. There is also evidence that CSS are used somewhat extensively by US law enforcement, spyware operators, and scammers. We know even less about how CSS are being used in other countries, though it's a safe bet that in other countries CSS are also used by law enforcement.

Many of these gaps in our knowledge are due to a lack of solid, empirical evidence about the function and usage of these devices. Police departments are resistant to releasing logs of their use, even when they are kept. The companies that manufacture CSS are unwilling to divulge details of how they work. Until now, to detect the presence of CSS, researchers and users have had to either rely on Android apps on rooted phones, or sophisticated and expensive software-defined radio rigs. Previous solutions have also focused on attacks on the legacy 2G cellular network, which is almost entirely shut down in the U.S. Seeking to learn from and improve on previous techniques for CSS detection, we have developed a better, cheaper alternative that works natively on the modern 4G network.

Introducing Rayhunter

To fill these gaps in our knowledge, we have created an open source project called Rayhunter.[1] It is developed to run on an Orbic mobile hotspot (Amazon, Ebay) which is available for $20 or less at the time of this writing. We have tried to make Rayhunter as easy as possible to install and use, regardless of your level of technical knowledge.
We hope that activists, journalists, and others will run these devices all over the world and help us collect data about the usage and capabilities of cell-site simulators (please see our legal disclaimer below).

Rayhunter works by intercepting, storing, and analyzing the control traffic (but not user traffic, such as web requests) between the mobile hotspot Rayhunter runs on and the cell tower to which it’s connected. Rayhunter analyzes the traffic in real-time and looks for suspicious events, which could include unusual requests like the base station (cell tower) trying to downgrade your connection to 2G, which is vulnerable to further attacks, or the base station requesting your IMSI under suspicious circumstances. Rayhunter notifies the user when something suspicious happens and makes it easy to access those logs for further review, allowing users to take appropriate action to protect themselves, such as turning off their phone and advising other people in the area to do the same. The user can also download the logs (in PCAP format) to send to an expert for further review.

The default Rayhunter user interface is very simple: a green (or blue in colorblind mode) line at the top of the screen lets the user know that Rayhunter is running and nothing suspicious has occurred. If that line turns red, it means that Rayhunter has logged a suspicious event. When that happens the user can connect to the device's WiFi access point and check a web interface to find out more information or download the logs.

Rayhunter in action

Installing Rayhunter is relatively simple. After buying the necessary hardware, you’ll need to download the latest release package, unzip the file, plug the device into your computer, and then run an install script for either Mac or Linux (we do not support Windows as an installation platform at this time).

We have a few different goals with this project. An overarching goal is to determine conclusively if CSS are used to surveil free expression such as protests or religious gatherings, and if so, how often it’s occurring. We’d like to collect empirical data (through network traffic captures, i.e. PCAPs) about what exploits CSS are actually using in the wild so the community of cellular security researchers can build better defenses. We also hope to get a clearer picture of the extent of CSS usage outside of the U.S., especially in countries that do not have legally enshrined free speech protections. Once we have gathered this data, we hope we can help folks more accurately engage in threat modeling about the risks of cell-site simulators, and avoid the fear, uncertainty, and doubt that comes from a lack of knowledge. We hope that any data we do find will be useful to those who are fighting through legal process or legislative policy to rein in CSS use where they live.

If you’re interested in running Rayhunter for yourself, pick up an Orbic hotspot (Amazon, Ebay), install Rayhunter, and help us collect data about how IMSI catchers operate! Together we can find out how cell site simulators are being used, and protect ourselves and our communities from this form of surveillance.
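To make the detection idea described above a bit more concrete, here is a minimal, purely illustrative sketch of the general approach: scanning a stream of already-parsed control-plane events for the two red flags mentioned earlier (a base station pushing a connection down to 2G, or an IMSI request under suspicious circumstances). This is not Rayhunter's actual code, and every event name and field below is hypothetical.

```python
# Purely illustrative sketch of a control-plane heuristic; event names,
# fields, and rules are hypothetical, not Rayhunter's real implementation.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ControlEvent:
    """A simplified, already-parsed control-plane message."""
    kind: str                            # e.g. "connection_setup", "identity_request"
    requested_rat: Optional[str] = None  # radio access technology, e.g. "LTE", "GSM"
    identity_type: Optional[str] = None  # e.g. "IMSI" (permanent) or a temporary ID
    authenticated: bool = False          # had the network authenticated itself yet?

def flag_suspicious(events: List[ControlEvent]) -> List[str]:
    """Return human-readable warnings for events matching simple red-flag rules."""
    warnings = []
    for event in events:
        # Red flag 1: the network steering the device down to 2G/GSM,
        # which is more vulnerable to further attacks.
        if event.kind == "connection_setup" and event.requested_rat == "GSM":
            warnings.append("Base station requested a downgrade to 2G")
        # Red flag 2: a request for the permanent identity (IMSI) before
        # authentication, rather than a temporary identifier.
        if (event.kind == "identity_request"
                and event.identity_type == "IMSI"
                and not event.authenticated):
            warnings.append("IMSI requested under suspicious circumstances")
    return warnings

if __name__ == "__main__":
    sample = [
        ControlEvent(kind="identity_request", identity_type="IMSI", authenticated=False),
        ControlEvent(kind="connection_setup", requested_rat="LTE"),
    ]
    for warning in flag_suspicious(sample):
        print("SUSPICIOUS:", warning)
```

Real-world detection is considerably messier, since legitimate networks also send identity requests and reconfiguration messages; that is exactly why collecting PCAPs for expert review matters.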
1. A note on the name: Rayhunter is named such because Stingray is a brand name for cell-site simulators that has become a common term for the technology. One of the only natural predators of the stingray in the wild is the orca, some of which hunt stingrays for pleasure using a technique called wavehunting. Because we like orcas, because we don’t like Stingray technology (though the animals themselves are great!), and because it was the only name not already trademarked, we chose Rayhunter.
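To make the detection idea described above more concrete, here is a minimal sketch, in Python, of the kind of heuristic a CSS detector applies to individual control-plane messages. It is illustrative only: the message fields and values below are hypothetical stand-ins, not Rayhunter’s actual data structures or analysis code.

```python
# Illustrative only: a toy heuristic in the spirit of a cell-site simulator
# detector. The ControlMessage fields are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class ControlMessage:
    rat: str                  # radio access technology, e.g. "LTE" or "GSM"
    identity_requested: str   # "", "IMSI", or "IMEI"
    registered: bool          # whether the device has completed registration

def flag_suspicious(msg: ControlMessage) -> list[str]:
    """Return human-readable warnings for a single control-plane message."""
    warnings = []
    # A base station steering a 4G-capable device down to 2G strips away
    # mutual authentication and opens the door to further attacks.
    if msg.rat == "GSM":
        warnings.append("connection downgraded to 2G")
    # Requesting the permanent IMSI outside of normal registration is a
    # classic IMSI-catcher behavior; legitimate towers normally rely on
    # temporary identifiers once one has been assigned.
    if msg.identity_requested == "IMSI" and not msg.registered:
        warnings.append("IMSI requested under suspicious circumstances")
    return warnings

if __name__ == "__main__":
    sample = ControlMessage(rat="GSM", identity_requested="IMSI", registered=False)
    for warning in flag_suspicious(sample):
        print("ALERT:", warning)
```

In a real detector, messages like these would be parsed out of the captured control traffic and any alert would also be written to the PCAP-backed log for later expert review.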
The U.S. Court of Appeals for the Ninth Circuit correctly held that Grindr, a popular dating app, can’t be held responsible for matching users and enabling them to exchange messages that led to real-world harm. EFF and the Woodhull Freedom Foundation filed an amicus brief in the Ninth Circuit in support of Grindr. Grindr and other dating apps are possible thanks to strong Section 230 immunity. Without this protection, dating apps—and other platforms that host user-generated content—would have more incentive to censor people online. While real-world harms do happen when people connect online, these can be directly redressed by holding perpetrators who did the harm accountable. The case, Doe v. Grindr, was brought by a plaintiff who was 15 years old when he signed up for Grindr but claimed to be over 18 years old to use the app. He was matched with other users and exchanged messages with them. This led to four in-person meetings that resulted in three out of four adult men being prosecuted for rape. The plaintiff brought various state law claims against Grindr centering around the idea that the app was defectively designed, enabling him to be matched with and to communicate with the adults. The plaintiff also brought a federal civil sex trafficking claim. Grindr invoked Section 230, the federal statute that has ensured a free and open internet for nearly 30 years. Section 230(c)(1) specifically provides that online services are generally not responsible for “publishing” harmful user-generated content. Section 230 protects users’ online speech by protecting the intermediaries we all rely on to communicate via dating apps, social media, blogs, email, and other internet platforms. The Ninth Circuit rightly affirmed the district court’s dismissal of all of the plaintiff’s claims. The court held that Section 230 bars nearly all of plaintiff’s claims (except the sex trafficking claim, which is exempted from Section 230). The court stated: Each of Doe’s state law claims necessarily implicates Grindr’s role as a publisher of third-party content. The theory underpinning Doe’s claims for defective design, defective manufacturing, and negligence faults Grindr for facilitating communication among users for illegal activity…. The Ninth Circuit’s holding is important because many plaintiffs have tried in recent years to plead around Section 230 by framing their cases as seeking to hold internet platforms responsible for their own “defective designs,” rather than third-party content. Yet, a closer look at a plaintiff’s allegations often reveals that the plaintiff’s harm is indeed premised on third-party content—that’s true in this case, where the plaintiff exchanged messages with the adult men. As we argued in our brief: Plaintiff’s claim here is based not on mere access to the app, but on the actions of a third party once John Doe logged in—messages exchanged between a third party and Doe, and ultimately, on unlawful acts occurring between them because of those communications. Additionally, courts generally have concluded that an internet platform’s features that relate to how users can engage with the app and how third-party content is displayed and organized, are also “publishing” activities protected by Section 230. As for the federal civil sex trafficking claim, the Ninth Circuit held that the plaintiff’s allegations failed to meet the statutory requirements. The court stated: Doe must plausibly allege that Grindr ‘knowingly’ sex trafficked a person by a list of specified means. 
But the [complaint] merely shows that Grindr provided a platform that facilitated sharing of messages between users. While the facts of this case are no doubt difficult, the Ninth Circuit reached the correct conclusion. Our modern communications are mediated by private companies, and any weakening of Section 230 immunity for internet platforms would stifle everyone’s ability to communicate, as companies would be incentivized to engage in greater censorship of users to mitigate their legal exposure. This does not leave victims without redress—they may seek to hold perpetrators responsible directly. Importantly in this case, three of the perpetrators were criminally charged. And should facts show that an online service participated in criminal conduct, Section 230 would not block a federal prosecution. The court’s ruling demonstrates that Section 230 is working as Congress intended.
Join EFF's Cindy Cohn and Eva Galperin in conversation with Ron Deibert of the University of Toronto’s Citizen Lab to discuss Ron’s latest book, Chasing Shadows: Cyber Espionage, Subversion and the Global Fight for Democracy. Chasing Shadows provides a front-row seat to a dark underworld of digital espionage, dark PR, and subversion. The book provides a gripping account of how the Citizen Lab, the world’s foremost digital watchdog, has uncovered dozens of cyber espionage cases and protects people in countries around the world. Called “essential reading” by Margaret Atwood, it’s a chilling reminder of the invisible invasions happening on smartphones and computers around the world.

LEARN MORE

When: Monday, March 10, 2025, 7:00 pm - 9:00 pm (PT)
Where: City Lights Bookstore, 261 Columbus Avenue, San Francisco, CA 94133

About the Author: Ronald J. Deibert is the founder and director of the Citizen Lab, a world-renowned digital security research center at the University of Toronto. The bestselling author of Reset: Reclaiming the Internet for Civil Society and Black Code: Surveillance, Privacy, and the Dark Side of the Internet, he has also written many landmark articles and reports on espionage operations that infiltrated government and NGO computer networks. His team’s exposés of the spyware that attacks journalists and anti-corruption advocates around the world have been featured in The New York Times, The Washington Post, Financial Times, and other media. Deibert has received multiple honors for his cutting-edge work, and in 2022 he was appointed an Officer of the Order of Canada—the country’s second-highest honor of merit.
EFF asked the California Supreme Court not to weaken the Stored Communications Act, a 1986 federal law that restricts how providers can disclose the content of your communications to the government or private parties. The law is built on the principle that you have a reasonable expectation of privacy that providers like Snap and Meta will not disclose your communications to third parties, even though the providers have access to those communications as they are stored on their systems. In an amicus brief, we urged the court to uphold these privacy protections, as courts have for nearly 40 years. EFF filed the brief along with the Center for Democracy & Technology and the Mozilla Corporation. A lower court decision got it wrong, and we are urging the California Supreme Court to overrule that decision. If the lower court's ruling is affirmed, Meta, Snap, and other providers would be permitted to voluntarily disclose the content of their users' communications to any other corporation, the government, or any individual for any reason. We previously helped successfully urge the California Supreme Court to hear this case.
EFF is here to keep you up-to-date with the latest news in the world of civil liberties and human rights online with our EFFector newsletter! This edition of the newsletter covers Apple's recent decision to turn off Advanced Data Protection for users in the U.K., our how-to guide for limiting Meta's ability to collect and monetize your personal data, and our recent victory against the government's use of Section 702 to spy on Americans. You can read the full newsletter here, and even get future editions directly to your inbox when you subscribe! Additionally, we've got an audio edition of EFFector on the Internet Archive, or you can listen to it on YouTube:

LISTEN ON YOUTUBE: EFFECTOR 37.2 - Fresh Threats to Privacy Around the Globe

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.
Flock Safety loves to crow about the thousands of local law enforcement agencies around the United States that have adopted its avian-themed automated license plate readers (ALPRs). But when a privacy activist launched a website to map out the exact locations of these pole-mounted devices, the company tried to clip his wings. The company sent DeFlock.me and its creator Will Freeman a cease-and-desist letter, claiming that the project dilutes its trademark. Suffice it to say, and to lean into ornithological wordplay, the letter is birdcage liner. Representing Freeman, EFF sent Flock Safety a letter rejecting the demand, pointing out that the grassroots project is well within its First Amendment rights.

Flock Safety’s car-tracking cameras have been spreading across the United States like an invasive species, preying on public safety fears and gobbling up massive amounts of sensitive driver data. The technology not only tracks vehicles by their license plates, but also creates “fingerprints” of each vehicle, including the make, model, color, and other distinguishing features. This is a mass surveillance technology that collects information on everyone, regardless of whether they are connected to a crime. It has been misused by police to spy on their ex-partners and could be used to target people engaged in First Amendment activities or seeking medical care.

Through crowdsourcing and open-source research, DeFlock.me aims to “shine a light on the widespread use of ALPR technology, raise awareness about the threats it poses to personal privacy and civil liberties, and empower the public to take action.” While EFF’s Atlas of Surveillance project has identified more than 1,700 agencies using ALPRs, DeFlock has mapped out more than 16,000 individual camera locations, more than a third of which are Flock Safety devices. Flock Safety is so integrated into law enforcement that it’s not uncommon to see law enforcement agencies actually promoting the company by name on their websites. The Sussex County Sheriff’s website in Virginia has only two items in its menu bar: Accident Reports and Flock Safety.

The name “DeFlock,” EFF told the vendor, represents the project’s goal of “ending ALPR usage and Flock’s status as one of the most widely used ALPR providers.” It’s accurate, appropriate, effective, and most importantly, legally protected. We wrote:

Your claims of dilution by blurring and/or tarnishment fail at the threshold, without even needing to address why dilution is unlikely. Federal anti-dilution law includes express carve-outs for any noncommercial use of a mark and for any use in connection with criticizing or commenting on the mark owner or its products. Mr. Freeman’s use of the name “DeFlock” is both.

Flock Safety’s cease-and-desist letter is just the latest in a long list of groups turning to bogus intellectual property claims to silence their critics. Frequently, these have no legal basis and are designed to frighten under-resourced activists and advocacy groups with high-powered law firm letterheads. EFF is here to stand up against these trademark bullies, and in the case of Flock Safety, flip them the bird.
UK Prime Minister Keir Starmer made a public commitment on February 14 to Laila Soueif, the mother of Alaa Abd El Fattah, stating “I will do all that I can to secure the release of her son Alaa Abd el-Fattah and reunite him with his family.” While that commitment was welcomed by the family, it is imperative that it now be followed up with concrete action. Laila has called on PM Starmer to speak directly to President Sisi of Egypt. Starmer has written to Sisi twice, in December and January, and his National Security Adviser, Jonathan Powell, discussed Alaa with Egyptian authorities in Cairo on January 2. UK authorities have not made public any further contact with Egypt since.

Laila, who has been on hunger strike since Alaa’s intended release date in September, was hospitalized on Monday night after her blood sugar dropped to worrying new levels. A letter published today from her NHS doctor states that there is now immediate risk to her life, including further deterioration or death. Nevertheless, Laila remains steadfast in her commitment to refrain from eating until her son is freed. In the words of Alaa’s sister Mona Seif: “all she wants is for [Alaa] to be free now that he served the full five year sentence, and after they stole 11 years of his and [his son] Khaled’s life.”

Alaa is a British citizen, and as such his government owes him more than mere lip service. The UK government can and must use every tactic available to them, including:

Changing travel advice on the Foreign Office’s website to reflect the fact that citizens arrested in Egypt cannot be guaranteed consular access;
Convening a joint meeting of ministers and officials of the Foreign, Commonwealth and Development Office; Ministry of Defence; and Department of Business and Trade to discuss a unified strategy toward Alaa’s case;
Summoning the Egyptian ambassador in London and restricting his access to Whitehall if Alaa is not released and returned to the UK;
Announcing a moratorium on any governmental assistance or promotion of new Foreign Direct Investments into Egypt, as called for by 15 NGOs in November.

EFF once again calls on Prime Minister Starmer to pick up the phone and call Egyptian President Sisi to free Alaa and save Laila—before it’s too late.
Earlier this month, the Senate passed the TAKE IT DOWN Act (S. 146) by a voice vote. The bill is meant to speed up the removal of non-consensual intimate imagery, or NCII, including videos that imitate real people, a technology sometimes called “deepfakes.” Protecting victims of these heinous privacy invasions is a legitimate goal. But good intentions alone are not enough to make good policy. As currently drafted, the TAKE IT DOWN Act mandates a notice-and-takedown system that threatens free expression, user privacy, and due process, without addressing the problem it claims to solve. This misguided bill can still be stopped in the House of Representatives. Help us speak out against it now.

TAKE ACTION

“Take It Down” Has No Real Safeguards

Before this vote, EFF, along with the Center for Democracy & Technology (CDT), Authors Guild, Demand Progress Action, Fight for the Future, Freedom of the Press Foundation, New America’s Open Technology Institute, Public Knowledge, Restore The Fourth, SIECUS: Sex Ed for Social Change, TechFreedom, and Woodhull Freedom Foundation, sent a letter to the Senate, asking them to change this legislation to protect legitimate speech that is not NCII. Changes are also needed to protect users who rely on encrypted services.

The letter explains that the bill’s “takedown” provision applies to a much broader category of content—potentially any images involving intimate or sexual content at all—than the narrower NCII definitions found elsewhere in the bill. The bill contains no protections against frivolous or bad-faith takedown requests. Lawful content—including satire, journalism, and political speech—could be wrongly censored. The legislation requires that apps and websites remove content within 48 hours, meaning that online service providers, particularly smaller ones, will have to comply so quickly to avoid legal risk that they won’t be able to verify claims. This would likely lead to the use of often-inaccurate automated filters that are infamous for flagging legal content, from fair-use commentary to news reporting. Communications providers that offer users end-to-end encrypted messaging, meanwhile, may be served with notices they simply cannot comply with, given the fact that these providers cannot view the contents of messages on their platforms. Platforms may respond by abandoning encryption entirely in order to be able to monitor content—turning private conversations into surveilled spaces.

Congress should focus on enforcing and improving the many existing civil and criminal laws that address NCII, rather than opting for a broad takedown regime that is bound to be abused. Tell your Member of Congress to oppose censorship and to oppose S. 146.

TAKE ACTION: Tell the House to Stop “Take It Down”

Further reading: EFF and allies’ letter opposing S. 146, the TAKE IT DOWN Act.
With the rise of digital surveillance, securing our health data is no longer just a privacy issue—it's a matter of personal safety. In the wake of the Supreme Court's reversal of Roe v. Wade and the growing restrictions on abortion and gender-affirming care, protecting our personal health data has never been more important. And in a world where nearly half of U.S. states have either banned or are on the brink of banning abortion, unfettered access to personal health data is an even more dangerous threat. That’s why EFF joins the New York Civil Liberties Union (NYCLU) in urging Governor Hochul to sign the New York Health Information Privacy Act (A.2141/S.929). This legislation is a crucial step toward safeguarding the digital privacy of New Yorkers at a time when health data is increasingly vulnerable to misuse.

Why Health Data Privacy Matters

When individuals seek reproductive health care or gender-affirming care, they leave behind a digital trail. Whether through search histories, email exchanges, travel itineraries, or data from period-tracker apps and smartwatches, every click, every action, and every step is tracked, often with little or no consent. And this kind of data—however collected—has already been used to criminalize individuals who were simply seeking health care. Unlike HIPAA, which regulates “covered entities”—the providers of treatment and payors/insurers that are part of the traditional health care system—and their “business associates,” this bill would expand its reach to cover a broad range of “new” entities, including data brokers, tech companies, and others in the digital ecosystem who can access and share this sensitive health information. The result is a growing web of entities collecting personal data, far beyond the scope of traditional health care providers. For example, in some states, individuals have been investigated or even prosecuted based on their digital data, simply for obtaining abortion care. In a world where our health choices are increasingly monitored, the need for robust privacy protections is clearer than ever. The New York Health Information Privacy Act is the Empire State’s opportunity to lead the nation in protecting its residents.

What Does the Health Information Privacy Act Do?

At its core, the New York Health Information Privacy Act would provide vital protections for New Yorkers' electronic health data. Here’s what the bill does:

Prohibits the sale of health data: Health data is not a commodity to be bought and sold. This bill ensures that your most personal information is not used for profit by commercial entities without your consent.
Requires explicit consent: Before health data is processed, New Yorkers will need to provide clear, informed consent. The bill limits processing (storing, collecting, using) of personal data to “strictly necessary” purposes only, minimizing unnecessary collection.
Data deletion rights: Health data will be deleted by default after 60 days, unless the individual requests otherwise. This empowers individuals to control their data, ensuring that unnecessary information doesn’t linger.
Non-discrimination protections: Individuals will not face discrimination or higher costs for exercising their privacy rights. No one should be penalized for wanting to protect their personal information.

Why New York Needs This Bill Now

The need for these protections is urgent. As digital surveillance expands, so does the risk of personal health data being used against individuals.
In a time when personal health decisions are under attack, it’s crucial that New Yorkers have control over their health information. By signing this bill, Governor Hochul would ensure that out-of-state actors cannot easily access New Yorkers’ health data without due process, protecting individuals from legal actions in states that criminalize reproductive and gender-affirming care. However, this bill still faces a critical shortcoming—the absence of a private right of action (PRA). Without it, individuals cannot directly sue companies for privacy violations, leaving them vulnerable. Accountability would fall solely on the Attorney General, who would need the resources to quickly and consistently enforce the new law. Nonetheless, the Attorney General’s role will now be critical in ensuring this bill is upheld, and they must remain steadfast in implementing these protections effectively. Governor Hochul: Sign A.2141/S.929 The importance of this legislation cannot be overstated—it is about protecting people from potential legal actions related to their health care decisions. By signing this bill, Governor Hochul would solidify New York’s position as a leader in health data privacy and take a firm stand against the misuse of personal information. New York has the power to protect its residents and set a strong precedent for privacy protections across the nation. Let’s ensure that personal health data remains in the hands of those who own it—the individuals themselves. Governor Hochul: This is your chance to make a difference. Let’s take action now to protect what matters most—our health, our data, and our rights. Sign A.2141/ S.929 today.
EFF does a lot of things, including impact litigation, legislative lobbying, and technology development, all to fight for your civil liberties in the digital age. With litigation, we directly represent clients and also file “amicus” briefs in court cases. An amicus brief, also called a “friend-of-the-court” brief, is when we don’t represent one of the parties on either side of the “v”—instead, we provide the court with a helpful outside perspective on the case, either on behalf of ourselves or other groups, that can help the court make its decision. Amicus briefs are a core part of EFF’s legal work. Over the years, courts at all levels have extensively engaged with and cited our amicus briefs, showing that they value our thoughtful legal analysis, technical expertise, and public interest mission. Unfortunately the Judicial Conference—the body that oversees the federal court system—has proposed changes to the rule governing amicus briefs (Federal Rule of Appellate Procedure 29) that would make it harder to file such briefs in the circuit courts. EFF filed comments with the Judicial Conference sharing our thoughts on the proposed rule changes (a total of 407 comments were filed). Two proposed changes are particularly concerning. First, amicus briefs would be “disfavored” if they address issues “already mentioned” by the parties. This language is extremely broad and may significantly reduce the amount and types of amicus briefs that are filed in the circuit courts. As we said in our comments: We often file amicus briefs that expand upon issues only briefly addressed by the parties, either because of lack of space given other issues that party counsel must also address on appeal, or a lack of deep expertise by party counsel on a specific issue that EFF specializes in. We see this often in criminal appeals when we file in support of the defendant. We also file briefs that address issues mentioned by the parties but additionally explain how the relevant technology works or how the outcome of the case will impact certain other constituencies. We then shared examples of EFF amicus briefs that may have been disfavored if the “already mentioned” standard had been in effect, even though our briefs provided help to the courts. Just two examples are: In United States v. Cano, we filed an amicus brief that addressed the core issue of the case—whether the border search exception to the Fourth Amendment’s warrant requirement applies to cell phones. We provided a detailed explanation of the privacy interests in digital devices, and a thorough Fourth Amendment analysis regarding why a warrant should be required to search digital devices at the border. The Ninth Circuit extensively engaged with our brief to vacate the defendant’s conviction. In NetChoice, LLC v. Attorney General of Florida, a First Amendment case about social media content moderation (later considered by the Supreme Court), we filed an amicus brief that elaborated on points only briefly made by the parties about the prevalence of specialized social media services reflecting a wide variety of subject matter focuses and political viewpoints. Several of the examples we provided were used by the 11th Circuit in its opinion. Second, the proposed rules would require an amicus organization (or person) to file a motion with the court and get formal approval before filing an amicus brief. This would replace the current rule, which also allows an amicus brief to be filed if both parties in the case consent (which is commonly what happens). 
As we stated in our comments: “Eliminating the consent provision will dramatically increase motion practice for circuit courts, putting administrative burdens on the courts as well as amicus brief filers.” We also argued that this proposed change “is not in the interests of justice.” We wrote: Having to write and file a separate motion may disincentivize certain parties from filing amicus briefs, especially people or organizations with limited resources … The circuits should … facilitate the participation by diverse organizations at all stages of the appellate process—where appeals often do not just deal with discrete disputes between parties, but instead deal with matters of constitutional and statutory interpretation that will impact the rights of Americans for years to come. Amicus briefs are a crucial part of EFF’s work in defending your digital rights, and our briefs provide valuable arguments and expertise that help the courts make informed decisions. That’s why we are calling on the Judicial Conference to reject these changes and preserve our ability to file amicus briefs in the federal appellate courts that make a difference. Your support is essential in ensuring that we can continue to fight for your digital rights—in and out of court. DONATE TO EFF
Early in January 2025 it seemed like TikTok was on the verge of being banned by the U.S. government. In reaction to this imminent ban, several million people in the United States signed up for a different China-based social network known in the U.S. as RedNote, and in China as Xiaohongshu (小红书/小紅書; which translates to Little Red Book). RedNote is an application and social network created in 2013 that currently has over 300 million users. Feature-wise, it is most comparable to Instagram and is primarily used for sharing pictures, videos, and shopping. The vast majority of its users live in China, were born after 1990, and are women. Even before the influx of new users in January, RedNote has historically had many users outside of China, primarily people from the Chinese diaspora who have friends and relatives on the network. RedNote is largely funded by two major Chinese tech corporations: Tencent and Alibaba.

When millions of U.S.-based users started flocking to the application, the traditional rounds of pearl clutching and concern trolling began. Many people raised the alarm about U.S. users entrusting their data to a Chinese company and, it is implied, the Chinese Communist Party. The reaction from U.S. users was an understandable, if unfortunate, bit of privacy nihilism. People responded that they “didn’t care if someone in China was getting their data since US companies such as Meta and Google had already stolen their data anyway.” “What is the difference,” people argued, “between Meta having my data and someone in China? How does this affect me in any way?”

Last week, The Citizen Lab at The Munk School of Global Affairs, University of Toronto, released a report authored by Mona Wang, Jeffrey Knockel, and Irene Poetranto which highlights three serious security issues in the RedNote app. The most concerning finding from Citizen Lab is a revelation that RedNote retrieves uploaded user content over plaintext HTTP. This means that anyone else on your network, at your internet service provider, or organizations like the NSA can see everything you look at and upload to RedNote. Moreover, someone could intercept that request and replace it with their own media, or even an exploit to install malware on your device.

In light of this report, the EFF Threat Lab decided to confirm the Citizen Lab findings and do some additional privacy investigation of RedNote. We used static analysis techniques for our investigation, including manual static analysis of decompiled source code, and automated scanners including MobSF and Exodus Privacy. We only analyzed Version 8.59.5 of RedNote for Android, downloaded from the website APK Pure. EFF has independently confirmed the finding that RedNote retrieves posted content over plaintext HTTP. Due to this lack of even basic transport layer encryption, we don’t think this application is safe for anyone to use. Even if you don’t care about giving China your data, it is not safe to use any application that doesn’t use encryption by default.
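As a rough illustration of the kind of first-pass static check involved, the sketch below scans a decompiled APK tree for hardcoded plaintext http:// endpoints. It is a simplified, hypothetical example (the directory name is made up), not the Citizen Lab or EFF Threat Lab tooling; a real analysis would also inspect network configuration files and observe live traffic.

```python
# A minimal first-pass check: after decompiling an APK (e.g. with jadx or
# apktool), scan the output tree for hardcoded plaintext "http://" URLs.
# The "rednote_decompiled/" directory name is a hypothetical placeholder.
import re
from pathlib import Path

HTTP_URL = re.compile(r'http://[\w\.\-/%\?=&]+')

def find_plaintext_endpoints(decompiled_dir: str) -> dict[str, set[str]]:
    """Map each decompiled source file to the plaintext URLs found inside it."""
    hits: dict[str, set[str]] = {}
    for path in Path(decompiled_dir).rglob("*"):
        if path.suffix not in {".java", ".smali", ".xml", ".json"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        urls = set(HTTP_URL.findall(text))
        if urls:
            hits[str(path)] = urls
    return hits

if __name__ == "__main__":
    for source_file, urls in find_plaintext_endpoints("rednote_decompiled/").items():
        print(source_file)
        for url in sorted(urls):
            print("   ", url)
```

A hit from a scan like this only shows that a plaintext endpoint exists in the code; confirming that sensitive content actually travels over it, as Citizen Lab and EFF did, requires watching the app's network traffic.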
Citizen Lab researchers also found that users’ file contents are readable by network attackers. We were able to confirm that RedNote encrypts several sensitive files with static keys which are present in the app and the same across all installations of the app, meaning anyone who was able to retrieve those keys from a decompiled version of the app could decrypt these sensitive files for any user of the application. The Citizen Lab report also found a vulnerability where an attacker could identify the contents of any file readable by the application. This was out of scope for us to test, but we find no reason to doubt this claim.

The third major finding by Citizen Lab was that RedNote transmits device metadata in a way that can be eavesdropped on by network attackers, sometimes without encryption at all, and sometimes in a way vulnerable to a machine-in-the-middle attack. We can confirm that RedNote does not validate HTTPS certificates properly. Testing this vulnerability was out of scope for EFF, but we find no reason to doubt this claim.

Permissions and Trackers

EFF performed further analysis of the permissions and trackers requested by RedNote. Our findings indicate two other potential privacy issues with the application. RedNote requests some very sensitive permissions, including location information, even when the app is not running in the foreground. This permission is not requested by other similar apps such as TikTok, Facebook, or Instagram. We also found, using an online scanner for tracking software called Exodus Privacy, that RedNote is not a platform which will protect its users from U.S.-based surveillance capitalism. In addition to sharing user data with the Chinese companies Tencent and ByteDance, it also shares user data with Facebook and Google.

Other Issues

RedNote contains functionality to update its own code after it’s downloaded from the Google Play store, using an open source library called APK Patch. This could be used to inject malicious code into the application after it has been downloaded, without such code being revealed in automated scans meant to protect against malicious applications being uploaded to official stores like Google Play.

Recommendations

Due to the lack of encryption, we do not consider it safe for anyone to run this app. If you are going to use RedNote, we recommend doing so with the absolute minimum set of permissions necessary for the app to function (see our guides for iPhone and Android). At least a part of this blame falls on Google: Android needs to stop allowing apps to make unencrypted requests at all. RedNote should immediately take steps to encrypt all traffic from their application and remove the permission for background location information.

Users should also keep in mind that RedNote is not a platform which values free speech. It’s a heavily censored application where topics such as political speech, drugs and addiction, and sexuality are more tightly controlled than on similar social networks. Since it shares data with Facebook and Google ad networks, RedNote users should also keep in mind that it’s not a platform that protects you from U.S.-based surveillance capitalism.

The willingness of users to so quickly move to RedNote also highlights the fact that people are hungry for platforms that aren't controlled by the same few American tech oligarchs. People will happily jump to another platform even if it presents new, unknown risks, or is controlled by foreign tech oligarchs such as Tencent and Alibaba.
However, federal bans of such applications are not the correct answer. When bans are targeted at specific platforms such as TikTok, DeepSeek, and RedNote, rather than at privacy-invasive practices such as sharing sensitive details with surveillance advertising platforms, users who cannot participate on the banned platform may still have their privacy violated when they flock to other platforms. The real solution to the potential privacy harms of apps like RedNote is to ensure (through technology, regulation, and law) that people’s sensitive information isn’t entered into the surveillance capitalist data stream in the first place. We need a federal, comprehensive, consumer-focused privacy law.

Our government is failing to address the fundamental harms of privacy-invading social media. Implementing xenophobic, free-speech-infringing policy is having the unintended consequence of driving folks to platforms with even more aggressive censorship. This outcome was foreseeable. Rather than a knee-jerk reaction banning the latest perceived threat, these issues could have been avoided by addressing privacy harms at the source and enacting strong consumer-protection laws.

Figure 1. Permissions requested by RedNote

android.permission.ACCESS_BACKGROUND_LOCATION: This app can access location at any time, even while the app is not in use.
android.permission.ACCESS_COARSE_LOCATION: This app can get your approximate location from location services while the app is in use. Location services for your device must be turned on for the app to get location.
android.permission.ACCESS_FINE_LOCATION: This app can get your precise location from location services while the app is in use. Location services for your device must be turned on for the app to get location. This may increase battery usage.
android.permission.ACCESS_MEDIA_LOCATION: Allows the app to read locations from your media collection.
android.permission.ACCESS_NETWORK_STATE: Allows the app to view information about network connections such as which networks exist and are connected.
android.permission.ACCESS_WIFI_STATE: Allows the app to view information about Wi-Fi networking, such as whether Wi-Fi is enabled and name of connected Wi-Fi devices.
android.permission.AUTHENTICATE_ACCOUNTS: Allows the app to use the account authenticator capabilities of the AccountManager, including creating accounts and getting and setting their passwords.
android.permission.BLUETOOTH: Allows the app to view the configuration of the Bluetooth on the phone, and to make and accept connections with paired devices.
android.permission.BLUETOOTH_ADMIN: Allows the app to configure the local Bluetooth phone, and to discover and pair with remote devices.
android.permission.BLUETOOTH_CONNECT: Allows the app to connect to paired Bluetooth devices.
android.permission.CAMERA: This app can take pictures and record videos using the camera while the app is in use.
android.permission.CHANGE_NETWORK_STATE: Allows the app to change the state of network connectivity.
android.permission.CHANGE_WIFI_STATE: Allows the app to connect to and disconnect from Wi-Fi access points and to make changes to device configuration for Wi-Fi networks.
android.permission.EXPAND_STATUS_BAR: Allows the app to expand or collapse the status bar.
android.permission.FLASHLIGHT: Allows the app to control the flashlight.
android.permission.FOREGROUND_SERVICE: Allows the app to make use of foreground services.
android.permission.FOREGROUND_SERVICE_DATA_SYNC: Allows the app to make use of foreground services with the type dataSync.
android.permission.FOREGROUND_SERVICE_LOCATION: Allows the app to make use of foreground services with the type location.
android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK: Allows the app to make use of foreground services with the type mediaPlayback.
android.permission.FOREGROUND_SERVICE_MEDIA_PROJECTION: Allows the app to make use of foreground services with the type mediaProjection.
android.permission.FOREGROUND_SERVICE_MICROPHONE: Allows the app to make use of foreground services with the type microphone.
android.permission.GET_ACCOUNTS: Allows the app to get the list of accounts known by the phone. This may include any accounts created by applications you have installed.
android.permission.INTERNET: Allows the app to create network sockets and use custom network protocols. The browser and other applications provide means to send data to the internet, so this permission is not required to send data to the internet.
android.permission.MANAGE_ACCOUNTS: Allows the app to perform operations like adding and removing accounts, and deleting their password.
android.permission.MANAGE_MEDIA_PROJECTION: Allows an application to manage media projection sessions. These sessions can provide applications the ability to capture display and audio contents. Should never be needed by normal apps.
android.permission.MODIFY_AUDIO_SETTINGS: Allows the app to modify global audio settings such as volume and which speaker is used for output.
android.permission.POST_NOTIFICATIONS: Allows the app to show notifications.
android.permission.READ_CALENDAR: This app can read all calendar events stored on your phone and share or save your calendar data.
android.permission.READ_CONTACTS: Allows the app to read data about your contacts stored on your phone. Apps will also have access to the accounts on your phone that have created contacts. This may include accounts created by apps you have installed. This permission allows apps to save your contact data, and malicious apps may share contact data without your knowledge.
android.permission.READ_EXTERNAL_STORAGE: Allows the app to read the contents of your shared storage.
android.permission.READ_MEDIA_AUDIO: Allows the app to read audio files from your shared storage.
android.permission.READ_MEDIA_IMAGES: Allows the app to read image files from your shared storage.
android.permission.READ_MEDIA_VIDEO: Allows the app to read video files from your shared storage.
android.permission.READ_PHONE_STATE: Allows the app to access the phone features of the device. This permission allows the app to determine the phone number and device IDs, whether a call is active, and the remote number connected by a call.
android.permission.READ_SYNC_SETTINGS: Allows the app to read the sync settings for an account. For example, this can determine whether the People app is synced with an account.
android.permission.RECEIVE_BOOT_COMPLETED: Allows the app to have itself started as soon as the system has finished booting. This can make it take longer to start the phone and allow the app to slow down the overall phone by always running.
android.permission.RECEIVE_USER_PRESENT: Unknown permission from Android reference.
android.permission.RECORD_AUDIO: This app can record audio using the microphone while the app is in use.
android.permission.REQUEST_IGNORE_BATTERY_OPTIMIZATIONS: Allows an app to ask for permission to ignore battery optimizations for that app.
android.permission.REQUEST_INSTALL_PACKAGES: Allows an application to request installation of packages.
android.permission.SCHEDULE_EXACT_ALARM: This app can schedule work to happen at a desired time in the future. This also means that the app can run when you're not actively using the device.
android.permission.SYSTEM_ALERT_WINDOW: This app can appear on top of other apps or other parts of the screen. This may interfere with normal app usage and change the way that other apps appear.
android.permission.USE_CREDENTIALS: Allows the app to request authentication tokens.
android.permission.VIBRATE: Allows the app to control the vibrator.
android.permission.WAKE_LOCK: Allows the app to prevent the phone from going to sleep.
android.permission.WRITE_CALENDAR: This app can add, remove, or change calendar events on your phone. This app can send messages that may appear to come from calendar owners, or change events without notifying their owners.
android.permission.WRITE_CLIPBOARD_SERVICE: Unknown permission from Android reference.
android.permission.WRITE_EXTERNAL_STORAGE: Allows the app to write the contents of your shared storage.
android.permission.WRITE_SETTINGS: Allows the app to modify the system's settings data. Malicious apps may corrupt your system's configuration.
android.permission.WRITE_SYNC_SETTINGS: Allows an app to modify the sync settings for an account. For example, this can be used to enable sync of the People app with an account.
cn.org.ifaa.permission.USE_IFAA_MANAGER: Unknown permission from Android reference.
com.android.launcher.permission.INSTALL_SHORTCUT: Allows an application to add Homescreen shortcuts without user intervention.
com.android.launcher.permission.READ_SETTINGS: Unknown permission from Android reference.
com.asus.msa.SupplementaryDID.ACCESS: Unknown permission from Android reference.
com.coloros.mcs.permission.RECIEVE_MCS_MESSAGE: Unknown permission from Android reference.
com.google.android.gms.permission.AD_ID: Unknown permission from Android reference.
com.hihonor.push.permission.READ_PUSH_NOTIFICATION_INFO: Unknown permission from Android reference.
com.hihonor.security.permission.ACCESS_THREAT_DETECTION: Unknown permission from Android reference.
com.huawei.android.launcher.permission.CHANGE_BADGE: Unknown permission from Android reference.
com.huawei.android.launcher.permission.READ_SETTINGS: Unknown permission from Android reference.
com.huawei.android.launcher.permission.WRITE_SETTINGS: Unknown permission from Android reference.
com.huawei.appmarket.service.commondata.permission.GET_COMMON_DATA: Unknown permission from Android reference.
com.huawei.meetime.CAAS_SHARE_SERVICE: Unknown permission from Android reference.
com.meizu.c2dm.permission.RECEIVE: Unknown permission from Android reference.
com.meizu.flyme.push.permission.RECEIVE: Unknown permission from Android reference.
com.miui.home.launcher.permission.INSTALL_WIDGET: Unknown permission from Android reference.
com.open.gallery.smart.Provider: Unknown permission from Android reference.
com.oplus.metis.factdata.permission.DATABASE: Unknown permission from Android reference.
com.oplus.permission.safe.AI_APP: Unknown permission from Android reference.
com.vivo.identifier.permission.OAID_STATE_DIALOG: Unknown permission from Android reference.
com.vivo.notification.permission.BADGE_ICON: Unknown permission from Android reference.
com.xiaomi.dist.permission.ACCESS_APP_HANDOFF: Unknown permission from Android reference.
com.xiaomi.dist.permission.ACCESS_APP_META: Unknown permission from Android reference.
com.xiaomi.security.permission.ACCESS_XSOF: Unknown permission from Android reference.
com.xingin.xhs.permission.C2D_MESSAGE: Unknown permission from Android reference.
com.xingin.xhs.permission.JOPERATE_MESSAGE: Unknown permission from Android reference.
com.xingin.xhs.permission.JPUSH_MESSAGE: Unknown permission from Android reference.
com.xingin.xhs.permission.MIPUSH_RECEIVE: Unknown permission from Android reference.
com.xingin.xhs.permission.PROCESS_PUSH_MSG: Unknown permission from Android reference.
com.xingin.xhs.permission.PUSH_PROVIDER: Unknown permission from Android reference.
com.xingin.xhs.push.permission.MESSAGE: Unknown permission from Android reference.
freemme.permission.msa: Unknown permission from Android reference.
freemme.permission.msa.SECURITY_ACCESS: Unknown permission from Android reference.
getui.permission.GetuiService.com.xingin.xhs: Unknown permission from Android reference.
ohos.permission.ACCESS_SEARCH_SERVICE: Unknown permission from Android reference.
oplus.permission.settings.LAUNCH_FOR_EXPORT: Unknown permission from Android reference.
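A permission list like the one above can be reproduced with standard tooling. As a hedged sketch (the output directory name is an assumption, and this shows only one of several equivalent approaches), the following parses the AndroidManifest.xml that apktool decodes from an APK and prints every declared permission:

```python
# Sketch: list every <uses-permission> entry in an apktool-decoded manifest.
# "rednote_apktool_out/" is a hypothetical output directory name.
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def list_permissions(manifest_path: str) -> list[str]:
    """Return all permission names declared in a decoded AndroidManifest.xml."""
    root = ET.parse(manifest_path).getroot()
    perms = []
    for elem in root.iter("uses-permission"):
        name = elem.get(ANDROID_NS + "name")  # the android:name attribute
        if name:
            perms.append(name)
    return sorted(set(perms))

if __name__ == "__main__":
    for perm in list_permissions("rednote_apktool_out/AndroidManifest.xml"):
        print(perm)
```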
Today, in response to the U.K.’s demands for a backdoor, Apple has stopped offering users in the U.K. Advanced Data Protection, an optional feature in iCloud that turns on end-to-end encryption for files, backups, and more. Had Apple complied with the U.K.’s original demands, it would have been required to create a backdoor not just for users in the U.K., but for people around the world, regardless of where they were or what citizenship they had. As we’ve said time and time again, any backdoor built for the government puts everyone at greater risk of hacking, identity theft, and fraud.

This blanket, worldwide demand put Apple in an untenable position. Apple has long claimed it wouldn’t create a backdoor, and in filings to the U.K. government in 2023, the company specifically raised the possibility of disabling features like Advanced Data Protection as an alternative. Apple's decision to disable the feature for U.K. users could well be the only reasonable response at this point, but it leaves those people at the mercy of bad actors and deprives them of a key privacy-preserving technology. The U.K. has chosen to make its own citizens less safe and less free.

Although the U.K. Investigatory Powers Act purportedly authorizes orders to compromise security like the one issued to Apple, policymakers in the United States are not entirely powerless. As Senator Ron Wyden and Representative Andy Biggs noted in a letter to the Director of National Intelligence (DNI) last week, the US and U.K. are close allies who have numerous cybersecurity- and intelligence-sharing agreements, but “the U.S. government must not permit what is effectively a foreign cyberattack waged through political means.” They pose a number of key questions, including whether the CLOUD Act—an “encryption-neutral” law that enables special status for the U.K. to request data directly from US companies—actually allows the sort of demands at issue here. We urge Congress and others in the US to pressure the U.K. to back down and to provide support for US companies to resist backdoor demands, regardless of which government issues them.

Meanwhile, Apple is not the only company operating in the U.K. that offers end-to-end encrypted backup features. For example, you can optionally enable end-to-end encryption for chat backups in WhatsApp or backups from Samsung Galaxy phones. Many cloud backup services offer similar protections, as do countless chat apps, like Signal, to secure conversations. We do not know if other companies have been approached with similar requests, but we hope they stand their ground as well.

If you’re in the U.K. and have not enabled ADP, you can no longer do so. If you have already enabled it, Apple will provide guidance soon about what to do. This change will not affect the end-to-end encryption used in Apple Messages, nor does it alter other data that’s end-to-end encrypted by default, like passwords and health data. But iCloud backups have long been a loophole for law enforcement to gain access to data otherwise not available to them on iPhones with device encryption enabled, including the contents of messages they’ve stored in the backup. Advanced Data Protection is an optional feature to close that loophole. Without it, U.K. users’ files and device backups will be accessible to Apple, and thus shareable with law enforcement.

We appreciate Apple’s stance against the U.K. government’s request. Weakening encryption violates fundamental rights.
We all have the right to private spaces, and any backdoor would annihilate that right. The U.K. must back down from these overreaching demands and allow Apple—and others—to provide the option for end-to-end encrypted cloud storage.
Early in January 2025 it seemed like TikTok was on the verge of being banned by the U.S. government. In reaction to this imminent ban, several million people in the United States signed up for a different China-based social network known in the U.S. as RedNote, and in China as Xiaohongshu (小红书/ 小紅書; which translates to Little Red Book). RedNote is an application and social network created in 2013 that currently has over 300 million users. Feature-wise, it is most comparable to Instagram and is primarily used for sharing pictures, videos, and shopping. The vast majority of its users live in China, are born after 1990, and are women. Even before the influx of new users in January, RedNote has historically had many users outside of China, primarily people from the Chinese diaspora who have friends and relatives on the network. RedNote is largely funded by two major Chinese tech corporations: Tencent and Alibaba. When millions of U.S. based users started flocking to the application, the traditional rounds of pearl clutching and concern trolling began. Many people raised the alarm about U.S. users entrusting their data with a Chinese company, and it is implied, the Chinese Communist Party. The reaction from U.S. users was an understandable, if unfortunate, bit of privacy nihilism. People responded that they, “didn’t care if someone in China was getting their data since US companies such as Meta and Google had already stolen their data anyway.” “What is the difference,” people argued, “between Meta having my data and someone in China? How does this affect me in any way?” Even if you don’t care about giving China your data, it is not safe to use any application that doesn’t use encryption by default. Last week, The Citizen Lab at The Munk School Of Global Affairs, University of Toronto, released a report authored by Mona Wang, Jeffrey Knockel, and Irene Poetranto which highlights three serious security issues in the RedNote app. The most concerning finding from Citizen Lab is a revelation that RedNote retrieves uploaded user content over plaintext http. This means that anyone else on your network, at your internet service provider, or organizations like the NSA, can see everything you look at and upload to RedNote. Moreover someone could intercept that request and replace it with their own media or even an exploit to install malware on your device. In light of this report the EFF Threat Lab decided to confirm the CItizen Lab findings and do some additional privacy investigation of RedNote. We used static analysis techniques for our investigation, including manual static analysis of decompiled source code, and automated scanners including MobSF and Exodus Privacy. We only analyzed Version 8.59.5 of RedNote for Android downloaded from the website APK Pure. EFF has independently confirmed the finding that Red Note retrieves posted content over plaintext http. Due to this lack of even basic transport layer encryption we don’t think this application is safe for anyone to use. Even if you don’t care about giving China your data, it is not safe to use any application that doesn’t use encryption by default. Citizen Lab researchers also found that users’ file contents are readable by network attackers. 
We were able to confirm that RedNote encrypts several sensitive files with static keys which are present in the app and the same across all installations of the app, meaning anyone who was able to retrieve those keys from a decompiled version of the app could decrypt these sensitive files for any user of the application. The Citizen Lab report also found a vulnerability where an attacker could identify the contents of any file readable by the application. This was out of scope for us to test but we find no reason to doubt this claim. The third major finding by Citizen Lab was that RedNote transmits device metadata in a way that can be eavesdropped on by network attackers, sometimes without encryption at all, and sometimes in a way vulnerable to a machine-in-the middle attack. We can confirm that RedNote does not validate HTTPS certificates properly. Testing this vulnerability was out of scope for EFF, but we find no reason to doubt this claim. Permissions and Trackers EFF performed further analysis of the permissions and trackers requested by RedNote. Our findings indicate two other potential privacy issues with the application. RedNote requests some very sensitive permissions, including location information, even when the app is not running in the foreground. This permission is not requested by other similar apps such as TikTok, Facebook, or Instagram. We also found, using an online scanner for tracking software called Exodus Privacy, that RedNote is not a platform which will protect its users from U.S.-based surveillance capitalism. In addition to sharing userdata with the Chinese companies Tencent and ByteDance, it also shares user data with Facebook and Google. Other Issues RedNote contains functionality to update its own code after it’s downloaded from the Google Play store using an open source library called APK Patch. This could be used to inject malicious code into the application after it has been downloaded without such code being revealed in automated scans meant to protect against malicious applications being uploaded to official stores, like Google Play. Recommendations Due to the lack of encryption we do not consider it safe for anyone to run this app. If you are going to use RedNote, we recommend doing so with the absolute minimum set of permissions necessary for the app to function (see our guides for iPhone and Android.) At least a part of this blame falls on Google. Android needs to stop allowing apps to make unencrypted requests at all. Due to the lack of encryption we do not consider it safe for anyone to run this app. RedNote should immediately take steps to encrypt all traffic from their application and remove the permission for background location information. Users should also keep in mind that RedNote is not a platform which values free speech. It’s a heavily censored application where topics such as political speech, drugs and addiction, and sexuality are more tightly controlled than similar social networks. Since it shares data with Facebook and Google ad networks, RedNote users should also keep in mind that it’s not a platform that protects you from U.S.-based surveillance capitalism. The willingness of users to so quickly move to RedNote also highlights the fact that people are hungry for platforms that aren't controlled by the same few American tech oligarchs. People will happily jump to another platform even if it presents new, unknown risks; or is controlled by foreign tech oligarchs such as Tencent and Alibaba. 
However, federal bans of such applications are not the correct answer. When bans are targeted at specific platforms such as TikTok, Deepseek, and RedNote rather than privacy-invasive practices such as sharing sensitive details with surveillance advertising platforms, users who cannot participate on the banned platform may still have their privacy violated when they flock to other platforms. The real solution to the potential privacy harms of apps like RedNote is to ensure (through technology, regulation, and law) that people’s sensitive information isn’t entered into the surveillance capitalist data stream in the first place. We need a federal, comprehensive, consumer-focused privacy law. Our government is failing to address the fundamental harms of privacy-invading social media. Implementing xenophobic, free-speech infringing policy is having the unintended consequence of driving folks to platforms with even more aggressive censorship. This outcome was foreseeable. Rather than a knee-jerk reaction banning the latest perceived threat, these issues could have been avoided by addressing privacy harms at the source and enacting strong consumer-protection laws. Figure 1. Permissions requested by RedNote Permission Description android.permission.ACCESS_BACKGROUND_LOCATION This app can access location at any time, even while the app is not in use. android.permission.ACCESS_COARSE_LOCATION This app can get your approximate location from location services while the app is in use. Location services for your device must be turned on for the app to get location. android.permission.ACCESS_FINE_LOCATION This app can get your precise location from location services while the app is in use. Location services for your device must be turned on for the app to get location. This may increase battery usage. android.permission.ACCESS_MEDIA_LOCATION Allows the app to read locations from your media collection. android.permission.ACCESS_NETWORK_STATE Allows the app to view information about network connections such as which networks exist and are connected. android.permission.ACCESS_WIFI_STATE Allows the app to view information about Wi-Fi networking, such as whether Wi-Fi is enabled and name of connected Wi-Fi devices. android.permission.AUTHENTICATE_ACCOUNTS Allows the app to use the account authenticator capabilities of the AccountManager, including creating accounts and getting and setting their passwords. android.permission.BLUETOOTH Allows the app to view the configuration of the Bluetooth on the phone, and to make and accept connections with paired devices. android.permission.BLUETOOTH_ADMIN Allows the app to configure the local Bluetooth phone, and to discover and pair with remote devices. android.permission.BLUETOOTH_CONNECT Allows the app to connect to paired Bluetooth devices android.permission.CAMERA This app can take pictures and record videos using the camera while the app is in use. android.permission.CHANGE_NETWORK_STATE Allows the app to change the state of network connectivity. android.permission.CHANGE_WIFI_STATE Allows the app to connect to and disconnect from Wi-Fi access points and to make changes to device configuration for Wi-Fi networks. android.permission.EXPAND_STATUS_BAR Allows the app to expand or collapse the status bar. android.permission.FLASHLIGHT Allows the app to control the flashlight. android.permission.FOREGROUND_SERVICE Allows the app to make use of foreground services. 
android.permission.FOREGROUND_SERVICE_DATA_SYNC: Allows the app to make use of foreground services with the type dataSync.
android.permission.FOREGROUND_SERVICE_LOCATION: Allows the app to make use of foreground services with the type location.
android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK: Allows the app to make use of foreground services with the type mediaPlayback.
android.permission.FOREGROUND_SERVICE_MEDIA_PROJECTION: Allows the app to make use of foreground services with the type mediaProjection.
android.permission.FOREGROUND_SERVICE_MICROPHONE: Allows the app to make use of foreground services with the type microphone.
android.permission.GET_ACCOUNTS: Allows the app to get the list of accounts known by the phone. This may include any accounts created by applications you have installed.
android.permission.INTERNET: Allows the app to create network sockets and use custom network protocols. The browser and other applications provide means to send data to the internet, so this permission is not required to send data to the internet.
android.permission.MANAGE_ACCOUNTS: Allows the app to perform operations like adding and removing accounts, and deleting their password.
android.permission.MANAGE_MEDIA_PROJECTION: Allows an application to manage media projection sessions. These sessions can provide applications the ability to capture display and audio contents. Should never be needed by normal apps.
android.permission.MODIFY_AUDIO_SETTINGS: Allows the app to modify global audio settings such as volume and which speaker is used for output.
android.permission.POST_NOTIFICATIONS: Allows the app to show notifications.
android.permission.READ_CALENDAR: This app can read all calendar events stored on your phone and share or save your calendar data.
android.permission.READ_CONTACTS: Allows the app to read data about your contacts stored on your phone. Apps will also have access to the accounts on your phone that have created contacts. This may include accounts created by apps you have installed. This permission allows apps to save your contact data, and malicious apps may share contact data without your knowledge.
android.permission.READ_EXTERNAL_STORAGE: Allows the app to read the contents of your shared storage.
android.permission.READ_MEDIA_AUDIO: Allows the app to read audio files from your shared storage.
android.permission.READ_MEDIA_IMAGES: Allows the app to read image files from your shared storage.
android.permission.READ_MEDIA_VIDEO: Allows the app to read video files from your shared storage.
android.permission.READ_PHONE_STATE: Allows the app to access the phone features of the device. This permission allows the app to determine the phone number and device IDs, whether a call is active, and the remote number connected by a call.
android.permission.READ_SYNC_SETTINGS: Allows the app to read the sync settings for an account. For example, this can determine whether the People app is synced with an account.
android.permission.RECEIVE_BOOT_COMPLETED: Allows the app to have itself started as soon as the system has finished booting. This can make it take longer to start the phone and allow the app to slow down the overall phone by always running.
android.permission.RECEIVE_USER_PRESENT: Unknown permission from android reference
android.permission.RECORD_AUDIO: This app can record audio using the microphone while the app is in use.
android.permission.REQUEST_IGNORE_BATTERY_OPTIMIZATIONS: Allows an app to ask for permission to ignore battery optimizations for that app.
android.permission.REQUEST_INSTALL_PACKAGES: Allows an application to request installation of packages.
android.permission.SCHEDULE_EXACT_ALARM: This app can schedule work to happen at a desired time in the future. This also means that the app can run when you're not actively using the device.
android.permission.SYSTEM_ALERT_WINDOW: This app can appear on top of other apps or other parts of the screen. This may interfere with normal app usage and change the way that other apps appear.
android.permission.USE_CREDENTIALS: Allows the app to request authentication tokens.
android.permission.VIBRATE: Allows the app to control the vibrator.
android.permission.WAKE_LOCK: Allows the app to prevent the phone from going to sleep.
android.permission.WRITE_CALENDAR: This app can add, remove, or change calendar events on your phone. This app can send messages that may appear to come from calendar owners, or change events without notifying their owners.
android.permission.WRITE_CLIPBOARD_SERVICE: Unknown permission from android reference
android.permission.WRITE_EXTERNAL_STORAGE: Allows the app to write the contents of your shared storage.
android.permission.WRITE_SETTINGS: Allows the app to modify the system's settings data. Malicious apps may corrupt your system's configuration.
android.permission.WRITE_SYNC_SETTINGS: Allows an app to modify the sync settings for an account. For example, this can be used to enable sync of the People app with an account.
cn.org.ifaa.permission.USE_IFAA_MANAGER: Unknown permission from android reference
com.android.launcher.permission.INSTALL_SHORTCUT: Allows an application to add Homescreen shortcuts without user intervention.
com.android.launcher.permission.READ_SETTINGS: Unknown permission from android reference
com.asus.msa.SupplementaryDID.ACCESS: Unknown permission from android reference
com.coloros.mcs.permission.RECIEVE_MCS_MESSAGE: Unknown permission from android reference
com.google.android.gms.permission.AD_ID: Unknown permission from android reference
com.hihonor.push.permission.READ_PUSH_NOTIFICATION_INFO: Unknown permission from android reference
com.hihonor.security.permission.ACCESS_THREAT_DETECTION: Unknown permission from android reference
com.huawei.android.launcher.permission.CHANGE_BADGE: Unknown permission from android reference
com.huawei.android.launcher.permission.READ_SETTINGS: Unknown permission from android reference
com.huawei.android.launcher.permission.WRITE_SETTINGS: Unknown permission from android reference
com.huawei.appmarket.service.commondata.permission.GET_COMMON_DATA: Unknown permission from android reference
com.huawei.meetime.CAAS_SHARE_SERVICE: Unknown permission from android reference
com.meizu.c2dm.permission.RECEIVE: Unknown permission from android reference
com.meizu.flyme.push.permission.RECEIVE: Unknown permission from android reference
com.miui.home.launcher.permission.INSTALL_WIDGET: Unknown permission from android reference
com.open.gallery.smart.Provider: Unknown permission from android reference
com.oplus.metis.factdata.permission.DATABASE: Unknown permission from android reference
com.oplus.permission.safe.AI_APP: Unknown permission from android reference
com.vivo.identifier.permission.OAID_STATE_DIALOG: Unknown permission from android reference
com.vivo.notification.permission.BADGE_ICON: Unknown permission from android reference
com.xiaomi.dist.permission.ACCESS_APP_HANDOFF: Unknown permission from android reference
com.xiaomi.dist.permission.ACCESS_APP_META: Unknown permission from android reference
com.xiaomi.security.permission.ACCESS_XSOF: Unknown permission from android reference
com.xingin.xhs.permission.C2D_MESSAGE: Unknown permission from android reference
com.xingin.xhs.permission.JOPERATE_MESSAGE: Unknown permission from android reference
com.xingin.xhs.permission.JPUSH_MESSAGE: Unknown permission from android reference
com.xingin.xhs.permission.MIPUSH_RECEIVE: Unknown permission from android reference
com.xingin.xhs.permission.PROCESS_PUSH_MSG: Unknown permission from android reference
com.xingin.xhs.permission.PUSH_PROVIDER: Unknown permission from android reference
com.xingin.xhs.push.permission.MESSAGE: Unknown permission from android reference
freemme.permission.msa: Unknown permission from android reference
freemme.permission.msa.SECURITY_ACCESS: Unknown permission from android reference
getui.permission.GetuiService.com.xingin.xhs: Unknown permission from android reference
ohos.permission.ACCESS_SEARCH_SERVICE: Unknown permission from android reference
oplus.permission.settings.LAUNCH_FOR_EXPORT: Unknown permission from android reference
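For readers who want to reproduce a permission inventory like Figure 1, the sketch below shows one way to pull the requested permissions out of an APK. It is a minimal example under stated assumptions: it assumes the Android SDK's aapt tool is installed and on your PATH, and the APK file name is a placeholder; it is not necessarily the exact tooling used for the analysis above.

import re
import subprocess

def requested_permissions(apk_path: str) -> list[str]:
    # "aapt dump permissions" prints one uses-permission line per permission;
    # the exact formatting varies between aapt versions, so we parse loosely.
    output = subprocess.run(
        ["aapt", "dump", "permissions", apk_path],
        capture_output=True, text=True, check=True,
    ).stdout
    return sorted(set(re.findall(r"uses-permission:\s*(?:name=)?'?([A-Za-z0-9_.]+)'?", output)))

if __name__ == "__main__":
    for permission in requested_permissions("rednote-8.59.5.apk"):  # placeholder file name
        print(permission)

Online services like Exodus Privacy report the same permission list, along with known trackers, without any local setup.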
A bill headed to the Senate floor in Utah would require officers to disclose if a police report was written by generative AI. The bill, S.B. 180, requires a department to have a policy governing the use of AI. This policy would mandate that police reports created in whole or in part by generative AI have a disclaimer that the report contains content generated by AI, and it requires officers to legally certify that the report was checked for accuracy. S.B. 180 is unfortunately a necessary step in the right direction when it comes to regulating the rapid spread of police using generative AI to write their narrative reports for them. EFF will continue to monitor this bill in hopes that it will be part of a larger conversation about more robust regulations. Specifically, Axon, maker of Tasers and seller of a shocking amount of police and surveillance tech, has recently rolled out a new product, Draft One, which uses body-worn camera audio to generate police reports. This product is spreading quickly in part because it is integrated with other Axon products which are already omnipresent in U.S. society. But it's going to take more than a disclaimer to curb the potential harms of AI-generated police reports. As we've previously cautioned, the public should be skeptical of AI's ability to accurately process and distinguish between the wide range of languages, dialects, vernacular, idioms, and slang people use. As online content moderation has shown, software may have a passable ability to capture words, but it often struggles with content and meaning. In a tense setting such as a traffic stop, AI mistaking a metaphorical statement for a literal claim could fundamentally change the content of a police report. Moreover, so-called artificial intelligence taking over consequential tasks and decision-making has the power to obscure human agency. Police officers who deliberately exaggerate or lie to shape the narrative available in body camera footage now have even more of a veneer of plausible deniability with AI-generated police reports. If police were to be caught in a lie concerning what's in the report, an officer might be able to say that they did not lie: the AI simply did not capture what was happening in the chaotic video. As this technology spreads without much transparency, oversight, or guardrails, we are likely to see more cities, counties, and states push back against its use. Out of fear that AI-generated reports would complicate and compromise cases in the criminal justice system, prosecutors in King County, Washington (which includes Seattle) have instructed officers not to use the technology for now. The use of AI to write police reports is troubling in ways we are accustomed to, but also in new ways. Not only do we not yet know how widespread use of this technology will affect the criminal justice system, but because of how the product is designed, there is a chance we won't know whether AI has been used even when we are staring directly at the police report in question. For that reason, it's no surprise that lawmakers in Utah have introduced this bill to require some semblance of transparency. We will likely see similar regulations and restrictions in other states and local jurisdictions, and possibly even stronger ones.
EFF is delighted to be attending RightsCon again—this year hosted in Taipei, Taiwan, from 24-27 February. RightsCon provides an opportunity for human rights experts, technologists, activists, and government representatives to discuss pressing human rights challenges and their potential solutions. Many EFFers are heading to Taipei and will be actively participating in this year's event. Several members will be leading sessions, speaking on panels, and will be available for networking.

Our delegation includes:
Alexis Hancock, Director of Engineering, Certbot
Babette Ngene, Public Interest Technology Director
Christoph Schmon, International Policy Director
Cindy Cohn, Executive Director
Daly Barnett, Senior Staff Technologist
David Greene, Senior Staff Attorney and Civil Liberties Director
Jillian York, Director of International Freedom of Expression
Karen Gullo, Senior Writer for Free Speech and Privacy
Paige Collings, Senior Speech and Privacy Activist
Svea Windwehr, Assistant Director of EU Policy
Veridiana Alimonti, Associate Director for Latin American Policy

We hope you'll have the opportunity to connect with us during the conference, especially at the following sessions:

Day 0 (Monday 24 February)

Mutual Support: Amplifying the Voices of Digital Rights Defenders in Taiwan and East Asia
09:00 - 12:30, Room 101C
Alexis Hancock, Director of Engineering, Certbot
Host institutions: Open Culture Foundation, Odditysay Labs, Citizen Congress Watch and FLAME
This event aims to present Taiwan and East Asia's digital rights landscape, highlighting current challenges faced by digital rights defenders and fostering resonance with participants' experiences. Join to engage in insightful discussions, learn from Taiwan's tech community and civil society, and contribute to the global dialogue on these pressing issues. The form to register is here.

Platform accountability in crisis? Global perspective on platform accountability frameworks
09:00 - 13:00, Room 202A
Christoph Schmon, International Policy Director; Babette Ngene, Public Interest Technology Director
Host institutions: Electronic Frontier Foundation (EFF), Access Now
This high-level panel will reflect on alarming developments in platforms' content policies and their enforcement, and discuss whether existing frameworks offer meaningful tools to counter the current platform accountability crisis. The starting point for the discussion will be Access Now's recently launched report Platform accountability: a rule-of-law checklist for policymakers. The panel will be followed by a workshop dedicated to the "Draft Viennese Principles for Embedding Global Considerations into Human-Rights-Centred DSA enforcement". Facilitated by the DSA Human Rights Alliance, the workshop will provide a safe space for civil society organisations to strategize and discuss necessary elements of a human rights-based approach to platform governance.

Day 1 (Tuesday 25 February)

Criminalization of Tor in Ola Bini's case? Lessons for digital experts in the Global South
09:00 - 10:00 (online)
Veridiana Alimonti, Associate Director for Latin American Policy
Host institutions: Access Now, Centro de Autonomía Digital (CAD), Observation Mission of the Ola Bini Case, Tor Project
This session will analyze how the use of Tor is criminalized in Ola Bini's case and its implications for digital experts in other contexts of criminalization in the Global South, especially when they defend human rights online.
Participants will work through various exercises to: (1) analyze, from a technical perspective, the judicial criminalization of Tor in Ola Bini's case, and (2) collectively analyze how its criminalization can affect (judicially) the work of digital experts from the Global South and discuss possible support alternatives.

The counter-surveillance supply chain
11:30 - 12:30, Room 201F
Babette Ngene, Public Interest Technology Director
Host institution: Meta
The fight against surveillance and other malicious cyber adversaries is a whole-of-society problem, requiring international norms and policies, in-depth research, platform-level defenses, investigation, and detection. This dialogue focuses on the critical first link in this counter-surveillance supply chain: the on-the-ground organizations around the world who are the first contact for local activists and organizations dealing with targeted malware. It will include an open discussion on how to improve the global response to surveillance and surveillance-for-hire actors through a lens of local contextual knowledge and information sharing.

Day 2 (Wednesday 26 February)

Derecho a no ser objeto de decisiones automatizadas: desafíos y regulaciones en el sector judicial
16:30 - 17:30, Room 101C
Veridiana Alimonti, Associate Director for Latin American Policy
Host institutions: Hiperderecho, Red en Defensa de los Derechos Digitales, Instituto Panamericano de Derecho y Tecnología
This panel will examine specific cases from Mexico, Peru, and Colombia to understand the ethical and legal implications of using artificial intelligence in the drafting and reasoning of judicial rulings. The dialogue seeks to address the right not to be subject to automated decisions, along with the ethical and legal implications of automating judicial rulings. Some tools can reproduce or amplify discriminatory stereotypes, in addition to possible violations of privacy and personal data protection rights, among other concerns.

Prying Open the Age-Gate: Crafting a Human Rights Statement Against Age Verification Mandates
16:30 - 17:30, Room 401
David Greene, Senior Staff Attorney and Civil Liberties Director
Host institutions: Electronic Frontier Foundation (EFF), Open Net, Software Freedom Law Centre, EDRi
The session will engage participants in considering the issues and seeding the drafting of a global human rights statement on online age verification mandates. After a background presentation on various global legal models to challenge such mandates (with facilitators representing Asia, Africa, Europe, and the US), participants will be encouraged to submit written inputs (to be read during the session) and contribute to a discussion. This will be the start of an ongoing effort that will extend beyond RightsCon, with the goal of producing a human rights statement that will be shared and endorsed broadly.

Day 3 (Thursday 27 February)

Let's talk about the elephant in the room: transnational policing and human rights
10:15 - 11:15, Room 201B
Veridiana Alimonti, Associate Director for Latin American Policy
Host institutions: Citizen Lab, Munk School of Global Affairs & Public Policy, University of Toronto
This dialogue focuses on growing trends surrounding transnational policing, which pose new and evolving challenges to international human rights.
The session will distill emergent themes, with focal points including expanding informal and formal transnational cooperation and data-sharing frameworks at regional and international levels, the evolving role of borders in the development of investigative methods, and the proliferation of new surveillance technologies including mercenary spyware and AI-driven systems.

Queer over fear: cross-regional strategies and community resistance for LGBTQ+ activists fighting against digital authoritarianism
11:30 - 12:30, Room 101D
Paige Collings, Senior Speech and Privacy Activist
Host institutions: Access Now, Electronic Frontier Foundation (EFF), De|Center, Fight for the Future
The rise of the international anti-gender movement has seen authorities pass anti-LGBTQ+ legislation that has made the stakes of survival even higher for sexual and gender minorities. This workshop will bring together LGBTQ+ activists from Africa, the Middle East, Eastern Europe, Central Asia, and the United States to exchange ideas for advocacy and liberation from the policies, practices, and directives deployed by states to restrict LGBTQ+ rights, as well as how these actions impact LGBTQ+ people—online and offline—particularly with regard to online organizing, protest, and movement building.
Community members coordinated to pack Little Rock City Hall on Tuesday, where board members voted 5-3 to end the city's contract with ShotSpotter. Initially funded through a federal grant, Little Rock began its experiment with the “gunshot detection” sensors in 2018. ShotSpotter (now SoundThinking) has long been accused of steering federal grants toward local police departments in an effort to secure funding for the technology. Members of Congress are investigating this funding. EFF has long encouraged communities to follow the money that pays for police surveillance technology. Now, faced with a $188,000 contract renewal using city funds, Little Rock has joined the growing number of cities nationwide that have rejected, ended, or called into question their use of the invasive, error-prone technology. EFF has been a vocal critic of gunshot detection systems and extensively documented how ShotSpotter sensors risk capturing private conversations and enable discriminatory policing—ultimately calling on cities to stop using the technology. This call has been echoed by grassroots advocates coordinating through networks like the National Stop ShotSpotter Coalition. Community organizers have dedicated countless hours to popular education, canvassing neighborhoods, and conducting strategic research to debunk the company's spurious marketing claims. Through that effort, Little Rock has now joined the ranks of cities throughout the country to reject surveillance technologies like gunshot detection that harm marginalized communities and fail time and time again to deliver meaningful public safety. If you live in a city that's also considering dropping (or installing) ShotSpotter, share this news with your community and local officials!
You shouldn't need a permission slip to read a webpage, whether you do it with your own eyes or use software to help. AI is a category of general-purpose tools with myriad beneficial uses. Requiring developers to license the materials needed to create this technology threatens the development of more innovative and inclusive AI models, as well as important uses of AI as a tool for expression and scientific research.

Threats to Socially Valuable Research and Innovation

Requiring researchers to license fair uses of AI training data could make socially valuable research based on machine learning (ML) and even text and data mining (TDM) prohibitively complicated and expensive, if not impossible. Researchers have relied on fair use to conduct TDM research for a decade, leading to important advancements in myriad fields. However, licensing the vast quantity of works that high-quality TDM research requires is frequently cost-prohibitive and practically infeasible. Fair use protects ML and TDM research for good reason. Without fair use, copyright would hinder important scientific advancements that benefit all of us. Empirical studies back this up: research using TDM methodologies is more common in countries that protect TDM research from copyright control; in countries that don't, copyright restrictions stymie beneficial research. It's easy to see why: it would be impossible to identify and negotiate with millions of different copyright owners to analyze, say, text from the internet. The stakes are high, because ML is critical to helping us interpret the world around us. It's being used by researchers to understand everything from space nebulae to the proteins in our bodies. When the task requires crunching a huge amount of data, such as the data generated by the world's telescopes, ML helps rapidly sift through the information to identify features of potential interest to researchers. For example, scientists are using AlphaFold, a deep learning tool, to understand biological processes and develop drugs that target disease-causing malfunctions in those processes. The developers released an open-source version of AlphaFold, making it available to researchers around the world. Other developers have already iterated upon AlphaFold to build transformative new tools.

Threats to Competition

Requiring AI developers to get authorization from rightsholders before training models on copyrighted works would limit competition to companies that have their own trove of training data, or the means to strike a deal with such a company. This would result in all the usual harms of limited competition—higher costs, worse service, and heightened security risks—as well as reducing the variety of expression used to train such tools and the expression allowed to users seeking to express themselves with the aid of AI. As the Federal Trade Commission recently explained, if a handful of companies control AI training data, "they may be able to leverage their control to dampen or distort competition in generative AI markets" and "wield outsized influence over a significant swath of economic activity." Legacy gatekeepers have already used copyright to stifle access to information and the creation of new tools for understanding it. Consider, for example, Thomson Reuters v. Ross Intelligence, widely considered to be the first lawsuit over AI training rights ever filed. Ross Intelligence sought to disrupt the legal research duopoly of Westlaw and LexisNexis by offering a new AI-based system.
The startup attempted to license the right to train its model on Westlaw's summaries of public domain judicial opinions and its method for organizing cases. Westlaw refused to grant the license and sued its tiny rival for copyright infringement. Ultimately, the lawsuit forced the startup out of business, eliminating a would-be competitor that might have helped increase access to the law. Similarly, shortly after Getty Images—a billion-dollar stock images company that owns hundreds of millions of images—filed a copyright lawsuit asking the court to order the "destruction" of Stable Diffusion over purported copyright violations in the training process, Getty introduced its own AI image generator trained on its own library of images. Requiring developers to license AI training materials benefits tech monopolists as well. For giant tech companies that can afford to pay, pricey licensing deals offer a way to lock in their dominant positions in the generative AI market by creating prohibitive barriers to entry. To develop a "foundation model" that can be used to build generative AI systems like ChatGPT and Stable Diffusion, developers need to "train" the model on billions or even trillions of works, often copied from the open internet without permission from copyright holders. There's no feasible way to identify all of those rightsholders—let alone execute deals with each of them. Even if these deals were possible, licensing that much content at the prices developers are currently paying would be prohibitively expensive for most would-be competitors. We should not assume that the same companies who built this world can fix the problems they helped create; if we want AI models that don't replicate existing social and political biases, we need to make it possible for new players to build them. Nor is pro-monopoly regulation through copyright likely to provide any meaningful economic support for vulnerable artists and creators. Notwithstanding the highly publicized demands of musicians, authors, actors, and other creative professionals, imposing a licensing requirement is unlikely to protect the jobs or incomes of the underpaid working artists that media and entertainment behemoths have exploited for decades. Because of the imbalance in bargaining power between creators and publishing gatekeepers, trying to help creators by giving them new rights under copyright law is, as EFF Special Advisor Cory Doctorow has written, like trying to help a bullied kid by giving them more lunch money for the bully to take. Entertainment companies' historical practices bear out this concern. For example, from the late 2000s to the mid-2010s, music publishers and recording companies struck multimillion-dollar direct licensing deals with music streaming companies and video sharing platforms. Google reportedly paid more than $400 million to a single music label, and Spotify gave the major record labels a combined 18 percent ownership interest in its now-$100 billion company. Yet music labels and publishers frequently fail to share these payments with artists, and artists rarely benefit from these equity arrangements. There is no reason to believe that the same companies will treat their artists more fairly once they control AI.

Threats to Free Expression

Generative AI tools like text and image generators are powerful engines of expression. Creating content—particularly images and videos—is time-intensive. It frequently requires tools and skills that many internet users lack.
Generative AI significantly expedites content creation and reduces the need for artistic ability and expensive photographic or video technology. This facilitates the creation of art that simply would not have existed and allows people to express themselves in ways they couldn't without AI. Some art forms historically practiced within the African American community—such as hip hop and collage—have a rich tradition of remixing to create new artworks that can be more than the sum of their parts. As professor and digital artist Nettrice Gaskins has explained, generative AI is a valuable tool for creating these kinds of art. Limiting the works that may be used to train AI would limit its utility as an artistic tool, and compound the harm that copyright law has already inflicted on historically Black art forms. Generative AI has the power to democratize speech and content creation, much like the internet has. Before the internet, a small number of large publishers controlled the channels of speech distribution, deciding which material reached audiences' ears. The internet changed that by allowing anyone with a laptop and Wi-Fi connection to reach billions of people around the world. Generative AI magnifies those benefits by enabling ordinary internet users to tell stories and express opinions, allowing them to generate text in a matter of seconds and easily create graphics, images, animation, and videos that, just a few years ago, only the most sophisticated studios had the capability to produce. Legacy gatekeepers want to expand copyright so they can reverse this progress. Don't let them: everyone deserves the right to use technology to express themselves, and AI is no exception.

Threats to Fair Use

In all of these situations, fair use—the ability to use copyrighted material without permission or payment in certain circumstances—often provides the best counter to restrictions imposed by rightsholders. But, as we explained in the first post in this series, fair use is under attack by the copyright creep. Publishers' recent attempts to impose a new licensing regime for AI training rights—despite lacking any recognized legal right to control AI training—threaten to undermine the public's fair use rights. By undermining fair use, the AI copyright creep makes all these other dangers more acute. Fair use is often what researchers and educators rely on to make their academic assessments and to gather data. Fair use allows competitors to build on existing work to offer better alternatives. And fair use lets anyone comment on, or criticize, copyrighted material. When gatekeepers make the argument against fair use and in favor of expansive copyright—in court, to lawmakers, and to the public—they are looking to cement their own power, and undermine ours.

A Better Way Forward

AI also threatens real harms that demand real solutions. Many creators and white-collar professionals increasingly believe that generative AI threatens their jobs. Many people also worry that it enables serious forms of abuse, such as AI-generated nonconsensual intimate imagery, including of children. Privacy concerns abound, as does consternation over misinformation and disinformation. And it's already harming the environment. Expanding copyright will not mitigate these harms, and we shouldn't forfeit free speech and innovation to chase snake oil "solutions" that won't work. We need solutions that address the roots of these problems, like inadequate protections for labor rights and personal privacy.
Targeted, issue-specific policies are far more likely to succeed in resolving the problems society faces. Take competition, for example. Proponents of copyright expansion argue that treating AI development like the fair use that it is would only enrich a handful of tech behemoths. But imposing onerous new copyright licensing requirements to train models would lock in the market advantages enjoyed by Big Tech and Big Media—the only companies that own large content libraries or can afford to license enough material to build a deep learning model—benefiting entrenched incumbents at the public's expense. What neither Big Tech nor Big Media will say is that stronger antitrust rules and enforcement would be a much better solution. What's more, looking beyond copyright future-proofs the protections. Stronger environmental protections, comprehensive privacy laws, worker protections, and media literacy will create an ecosystem where we will have defenses against any new technology that might cause harm in those areas, not just generative AI. Expanding copyright, on the other hand, threatens socially beneficial uses of AI—for example, to conduct scientific research and generate new creative expression—without meaningfully addressing the harms. This post is part of our AI and Copyright series. For more information about the state of play in this evolving area, see our first post.
The launch of ChatGPT and other deep learning tools quickly led to a flurry of lawsuits against model developers. Legal theories vary, but most are rooted in copyright: plaintiffs argue that use of their works to train the models was infringement; developers counter that their training is fair use. Meanwhile, developers are making as many licensing deals as possible to stave off future litigation, and it's a sound bet that the existing litigation is an elaborate scramble for leverage in settlement negotiations. These cases can end one of three ways: rightsholders win, everybody settles, or developers win. As we've noted before, we think the developers have the better argument. But that's not the only reason they should win these cases: while creators have a legitimate gripe, expanding copyright won't protect jobs from automation. A win for rightsholders or even a settlement could also lead to significant harm, especially if it undermines fair use protections for research uses or artistic protections for creators. In this post and a follow-up, we'll explain why.

State of Play

First, we need some context, so here's the state of play:

DMCA Claims

Multiple courts have dismissed claims under Section 1202(b) of the Digital Millennium Copyright Act, stemming from allegations that developers removed or altered attribution information during the training process. In Raw Story Media v. OpenAI, Inc., the Southern District of New York dismissed these claims because the plaintiff had not "plausibly alleged" that training ChatGPT on their works had actually harmed them, and there was no "substantial risk" that ChatGPT would output their news articles. Because ChatGPT was trained on a "massive amount of information from unnumerable sources on almost any given subject…the likelihood that ChatGPT would output plagiarized content from one of Plaintiffs' articles seems remote." Courts granted motions to dismiss similar DMCA claims in Andersen v. Stability AI, Ltd., The Intercept Media, Inc. v. OpenAI, Inc., Kadrey v. Meta Platforms, Inc., and Tremblay v. OpenAI. Another such case, Doe v. GitHub, Inc., will soon be argued in the Ninth Circuit.

Copyright Infringement Claims

Rightsholders also assert ordinary copyright infringement, and the initial holdings are a mixed bag. In Kadrey v. Meta Platforms, Inc., for example, the court dismissed "nonsensical" claims that Meta's LLaMA models are themselves infringing derivative works. In Andersen v. Stability AI Ltd., however, the court held that copyright claims based on the assumption that the plaintiffs' works were included in a training data set could go forward, where the use of plaintiffs' names as prompts generated images that were "similar to plaintiffs' artistic works." The court also held that the plaintiffs plausibly alleged that the model was designed to "promote infringement" for similar reasons. It's early in the case—the court was merely deciding if the plaintiffs had alleged enough to justify further proceedings—but it's a dangerous precedent. Crucially, copyright protection extends only to the actual expression of the author—the underlying facts and ideas in a creative work are not themselves protected. That means that, while a model cannot output an identical or near-identical copy of a training image without running afoul of copyright, it is free to generate stylistically "similar" images.
Training alone is insufficient to give rise to a claim of infringement, and the court impermissibly conflated permissible "similar" outputs with the copying of protectable expression.

Fair Use

In most of the AI cases, courts have yet to consider—let alone decide—whether fair use applies. In one unusual case, however, the judge has flip-flopped, first finding that the defendant's use was fair, then changing his mind. This case, Thomson Reuters Enterprise Centre GMBH v. Ross Intelligence, Inc., concerns legal research technology. Thomson Reuters provides search tools to locate relevant legal opinions and prepares annotations describing the opinions' holdings. Ross Intelligence hired lawyers to look at those annotations and rewrite them in their own words. Their output was used to train Ross's search tool, ultimately providing users with relevant legal opinions based on their queries. Originally, the court got it right, holding that if the AI developer used copyrighted works only "as a step in the process of trying to develop a 'wholly new,' albeit competing, product," that's "transformative intermediate copying," i.e., fair use. After reconsidering, however, the judge changed his mind in several respects, essentially disagreeing with prior case law regarding search engines. We think it's unlikely that an appeals court would uphold this divergence from precedent. But if it did, it would present legal problems for AI developers—and anyone creating search tools. Copyright law favors the creation of new technology to learn and locate information, even when developing the tool required copying books and web pages in order to index them. Here, the search tool is providing links to legal opinions, not presenting users with any Thomson Reuters original material. The tool is concerned with noncopyrightable legal holdings and principles, not with supplanting any creative expression embodied in the annotations prepared by Thomson Reuters. Thomson Reuters has often pushed the limits of copyright in an attempt to profit off of the public's need to access and refer to the law, for instance by claiming a proprietary interest in its page numbering of legal opinions. Unfortunately, the judge in this case enabled them to do so in a new way. We hope the appeals court reverses the decision.

The Side Deals

While all of this is going on, developers that can afford it—OpenAI, Google, and other tech behemoths—have inked multimillion-dollar licensing deals with Reddit, the Wall Street Journal, and myriad other corporate copyright owners. There's suddenly a $2.5 billion licensing market for training data—even though the use of that data is almost certainly fair use.

What's Missing

This litigation is getting plenty of attention, and it should, because the stakes are high. Unfortunately, the real stakes are getting lost. These cases are not just about who will get the most financial benefits from generative AI. The outcomes will decide whether a small group of corporations that can afford big licensing fees will determine the future of AI for all of us. More on that tomorrow. This post is part of our AI and Copyright series. Check out our other post in this series.
Campaign Aims to Ensure that People Can Access Reproductive Rights Information Through Social Media SAN FRANCISCO—The Electronic Frontier Foundation (EFF) and the Repro Uncensored coalition on Wednesday launched the #StopCensoringAbortion campaign to ensure that people who need reproductive health and abortion information can find and share it. Censorship of this information by social media companies appears to be increasing, so the campaign will collect information to track such incidents. “This censorship is alarming, and we’re seeing it take place across popular social media platforms like Facebook, Instagram, and TikTok, where abortion-related content is often flagged or removed under vague ‘community guideline’ violations, despite the content being legal and factual,” said EFF Legislative Activist Rindala Alajaji. “This lack of transparency leaves organizations, influencers, and individuals in the dark, fueling a wider culture of online censorship that jeopardizes public access to vital healthcare information.” Initially, the campaign is collecting stories from people and organizations who have faced censorship on these platforms. This will help the public and the companies understand how often this is happening, who is affected, and with what consequences. EFF will use that information to demand that censorship stop and that the companies create greater transparency in their practices, which are often obscure and difficult to track. Tech companies must not silence critical conversations about reproductive rights. "We are not simply raising awareness—we are taking action to hold tech companies accountable for their role in censoring free speech around reproductive health. The stories we collect will be instrumental in presenting to the platforms the breadth of this problem, drawing a picture of its impact, and demanding more transparent policies,” Alajaji said. “If you or someone you know has had abortion-related content taken down or shadowbanned by a social media platform, your voice is crucial in this fight. By sharing your experience, you’ll be contributing to a larger movement to end censorship and demand that social media platforms stop restricting access to critical reproductive health information.” In addition to a portal for reporting incidents of online abortion censorship, the campaign’s landing page provides links to reporting and research on this censorship. Additionally, the page includes digital privacy and security guides for abortion activists, medical personnel, and patients. With reproductive rights under fire across the U.S. and around the world, access to accurate abortion information has never been more critical. Reproductive health and rights organizations have turned to online platforms to share essential, sometimes life-saving guidance and resources. Whether they provide the latest updates on abortion laws, where to find clinics, or education about abortion medication, online spaces have become a lifeline particularly for those in regions where reproductive freedoms are under siege. But a troubling trend is making it harder for people to access vital abortion information: Social media platforms are censoring or removing abortion-related content, often without a clear justification or policy basis. A recent example surfaced last month when Instagram posts by Aid Access, an online abortion services provider, were either blurred out or prevented from loading entirely. 
This sparked concerns in the press about how recent content moderation policy changes by Meta, the parent company of Instagram and Facebook, would affect the availability of reproductive health information.

For the campaign landing page: https://www.eff.org/pages/stop-censoring-abortion

Contact: Rindala Alajaji, Legislative Activist, rin@eff.org
With reproductive rights under fire across the U.S. and globally, access to accurate abortion information has never been more critical—especially online. That’s why reproductive health and rights organizations have turned to online platforms to share essential, sometimes life-saving, guidance and resources. Whether it's how to access information about abortion medication, where to find clinics, or the latest updates on abortion laws, these online spaces have become a lifeline, particularly for those in regions where reproductive freedoms are under siege. But there's a troubling trend making it harder for people to access vital abortion information: social media platforms are increasingly censoring or removing abortion-related content—often without clear justification or policy basis. A recent example surfaced last month when a number of Instagram posts by Aid Access, an online abortion services provider, were either blurred out or unable to load entirely. This sparked concerns in the press about how recent content moderation policy changes by Meta, the parent company of Instagram and Facebook, would affect the availability of reproductive health information. The result? Crucial healthcare information gets erased, free expression is stifled, and people are left in the dark about their rights and healthcare options. This censorship is alarming, and we’re seeing it take place across popular social media platforms like Facebook, Instagram, and TikTok, where abortion-related content is often flagged or removed under vague "community guideline" violations, despite the content being perfectly legal and factual. This lack of transparency leaves organizations, influencers, and individuals in the dark, fueling a wider culture of online censorship that jeopardizes public access to vital healthcare information. #StopCensoringAbortion: An EFF and Repro Uncensored Collaboration In response to this growing issue, EFF has partnered with the Repro Uncensored coalition to call attention to instances of reproductive health and abortion content being removed or suppressed by social media platforms. We are collecting stories from individuals and organizations who have faced censorship on these platforms to expose the true scale of the issue. Our goal is to demand greater transparency in tech companies' moderation practices and ensure that their actions do not silence critical conversations about reproductive rights. We are not simply raising awareness—we are taking action to hold tech companies accountable for their role in censoring free speech around reproductive health. Share Your Story If you or someone you know has had abortion-related content taken down or shadowbanned by a social media platform, your voice is crucial in this fight. By sharing your experience, you’ll be contributing to a larger movement to end censorship and demand that social media platforms stop restricting access to critical reproductive health information. These stories will be instrumental in presenting to the platforms the breadth of this problem, drawing a picture of its impact, and demanding more transparent policies. If you’re able to spend five minutes reporting your experience, EFF and the rest of the Repro Uncensored coalition will do our best to help: https://www.reprouncensored.org/report-incident Even If You Haven’t Been Censored, You Can Still Help! Not everyone has experienced censorship, but that doesn’t mean you can’t contribute to the cause. You can still help by spreading the word. 
Share the #StopCensoringAbortion campaign on your social media platforms and visit our landing page for more resources and actions. Follow Repro Uncensored and EFF on Instagram, and sign up for email updates about this campaign. The more people who are involved, the stronger our collective voice will be. Together, we can amplify the message that information about reproductive health and rights should never be silenced—whether in the real world or online.
This post is part three in a series of posts about EFF's work in Europe. Read about how and why we work in Europe here. EFF's mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners of the globe, in recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and technology to the European fight for digital rights. In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and discuss how what happens in Europe can affect digital rights across the globe.

Implementing a Privacy First Approach to Fighting Online Harms

Infringements on privacy are commonplace across the world, and Europe is no exception. Governments and regulators across the region are increasingly focused on a range of risks associated with the design and use of online platforms, such as addictive design, the effects of social media consumption on children's and teenagers' mental health, and dark patterns limiting consumer choices. Many of these issues share a common root: the excessive collection and processing of our most private and sensitive information by corporations for their own financial gain. One necessary approach to solving this pervasive problem is to reduce the amount of data that these entities can collect, analyze, and sell. The European General Data Protection Regulation (GDPR) is central to protecting users' data protection rights in Europe, but the impact of the GDPR ultimately depends on how well it is enforced. Strengthening the enforcement of the GDPR in areas where data can be used to target, discriminate, and undermine fundamental rights is therefore a cornerstone of our work. Beyond the GDPR, we also bring our privacy-first approach to fighting online harms to discussions on online safety and digital fairness. The Digital Services Act (DSA) takes some important steps to limit the use of certain data categories to target users with ads, and bans targeted ads for minors completely. This is the right approach, which we will build on as we contribute to the debate around the upcoming Digital Fairness Act.

Age Verification Tools Are No Silver Bullet

As in many other jurisdictions around the world, age verification has become a hotly debated topic in the EU, with governments across Europe seeking to introduce age verification mandates. In the United Kingdom, legislation like the Online Safety Act (OSA) was introduced to make the UK "the safest place" in the world to be online. The OSA requires platforms to prevent individuals from encountering certain illegal content, which will likely mandate the use of intrusive scanning systems. Even worse, it empowers the British government, in certain situations, to demand that online platforms use government-approved software to scan for illegal content. And they are not alone in seeking to do so. Last year, France banned social media access for children under 15 without parental consent, and Norway also pledged to pursue a similar ban. Children's safety is important, but there is little evidence that online age verification tools can help achieve this goal. EFF has long fought against mandatory age verification laws, from the U.S. to Australia, and we'll continue to stand up against these types of laws in Europe, not just for the sake of free expression, but to protect the free flow of information that is essential to a free society.
Challenging Creeping Surveillance Powers

For years, we've observed a worrying tendency for technologies designed to protect people's privacy and data to be re-framed as security concerns. And recent developments in Europe, like Germany's rush to introduce biometric surveillance, signal a dangerous move towards expanding surveillance powers, justified by narratives framing complex digital policy issues as primarily security concerns. These approaches invite tradeoffs that risk undermining the privacy and free expression of individuals in the EU and beyond. Even though their access to data has never been broader, law enforcement authorities across Europe continue to peddle the tale of the world "going dark." With EDRi, we criticized the EU's "going dark" high-level group and sent a joint letter warning against granting law enforcement unfettered capacities that may lead to mass surveillance and violate fundamental rights. We have also been involved in Pegasus spyware investigations, with EFF's Executive Director Cindy Cohn participating in an expert hearing on the matter. The issue of spyware is pervasive and intersects with many components of EU law, such as the anti-spyware provisions contained within the EU Media Freedom Act. Intrusive surveillance has a global dimension, and our work has combined advocacy at the UN with advocacy in the EU, for example by urging the EU Parliament to reject the UN Cybercrime Treaty. Rather than increasing surveillance, countries across Europe must make use of their prerogatives to ban biometric surveillance, ensuring that the use of this technology is not permitted in sensitive contexts such as Europe's borders. Face recognition, for example, presents an inherent threat to individual privacy, free expression, information security, and social justice. In the UK, we've been working with national groups to ban government use of face recognition technology, which is currently administered by local police forces. Given the proliferation of state surveillance across Europe, government use of this technology must be banned.

Protecting the Right to Secure and Private Communications

EFF works closely on issues like encryption to defend the right to private communications in Europe. For years, EFF fought hard against an EU proposal that, if it became law, would have pressured online services to abandon end-to-end encryption. We joined together with EU allies and urged people to sign the "Don't Scan Me" petition. We lobbied EU lawmakers and urged them to protect their constituents' human right to have a private conversation—backed up by strong encryption. Our message broke through, and a key EU committee adopted a position that bars the mass scanning of messages and protects end-to-end encryption. It also bars mandatory age verification, whereby users would have had to show ID to get online. As Member States are still debating their position on the proposal, this fight is not over yet. But we are encouraged by the recent European Court of Human Rights ruling which confirmed that undermining encryption violates fundamental rights to privacy. EFF will continue to advocate for this to governments and to the corporations providing our messaging services. As we've said many times, both in Europe and the U.S., there is no middle ground on content scanning and no "safe backdoor" if the internet is to remain free and private. Either all content is scanned and all actors—including authoritarian governments and rogue criminals—have access, or no one does.
EFF will continue to advocate for the right to a private conversation, and to hold the EU accountable to the international and European human rights protections it is a signatory to.

Looking Forward

EU legislation and international treaties should contain concrete human rights safeguards, robust data privacy standards, and sharp limits on intrusive surveillance powers, including in the context of global cooperation. Much work remains to be done. And we are ready for it. Late last year, we put forward comprehensive policy recommendations to European lawmakers, and we will continue fighting for an internet where everyone can make their voice heard. In the next—and final—post in this series, you will learn more about how we work in Europe to ensure that digital markets are fair, offer users choice, and respect fundamental rights.
We were able to confirm that RedNote encrypts several sensitive files with static keys which are present in the app and the same across all installations, meaning anyone who was able to retrieve those keys from a decompiled version of the app could decrypt these sensitive files for any user of the application. The Citizen Lab report also found a vulnerability where an attacker could identify the contents of any file readable by the application. This was out of scope for us to test, but we find no reason to doubt this claim. The third major finding by Citizen Lab was that RedNote transmits device metadata in a way that can be eavesdropped on by network attackers, sometimes without encryption at all, and sometimes in a way vulnerable to a machine-in-the-middle attack because RedNote does not validate HTTPS certificates properly. Testing this vulnerability was out of scope for EFF, but we find no reason to doubt this claim. Permissions and Trackers EFF performed further analysis of the permissions and trackers requested by RedNote. Our findings indicate two other potential privacy issues with the application. RedNote requests some very sensitive permissions, including location information, even when the app is not running in the foreground. This permission is not requested by other similar apps such as TikTok, Facebook, or Instagram. We also found, using an online scanner for tracking software called Exodus Privacy, that RedNote is not a platform which will protect its users from U.S.-based surveillance capitalism. In addition to sharing user data with the Chinese companies Tencent and ByteDance, it also shares user data with Facebook and Google. Other Issues RedNote contains functionality to update its own code after it’s downloaded from the Google Play store, using an open source library called APK Patch. This could be used to inject malicious code into the application after it has been downloaded, without such code being revealed in the automated scans meant to protect against malicious applications being uploaded to official stores like Google Play. Recommendations Due to the lack of encryption, we do not consider it safe for anyone to run this app. If you are going to use RedNote anyway, we recommend doing so with the absolute minimum set of permissions necessary for the app to function (see our guides for iPhone and Android). At least a part of this blame falls on Google: Android needs to stop allowing apps to make unencrypted requests at all. RedNote should immediately take steps to encrypt all traffic from their application and remove the permission for background location information. Users should also keep in mind that RedNote is not a platform which values free speech. It’s a heavily censored application where topics such as political speech, drugs and addiction, and sexuality are more tightly controlled than on similar social networks. Since it shares data with Facebook and Google ad networks, RedNote users should also keep in mind that it’s not a platform that protects them from U.S.-based surveillance capitalism. The willingness of users to so quickly move to RedNote also highlights the fact that people are hungry for platforms that aren't controlled by the same few American tech oligarchs. People will happily jump to another platform even if it presents new, unknown risks, or is controlled by foreign tech oligarchs such as Tencent and Alibaba.
However, federal bans of such applications are not the correct answer. When bans are targeted at specific platforms such as TikTok, DeepSeek, and RedNote rather than at privacy-invasive practices such as sharing sensitive details with surveillance advertising platforms, users who cannot participate on the banned platform may still have their privacy violated when they flock to other platforms. The real solution to the potential privacy harms of apps like RedNote is to ensure (through technology, regulation, and law) that people’s sensitive information isn’t entered into the surveillance capitalist data stream in the first place. We need a federal, comprehensive, consumer-focused privacy law. Our government is failing to address the fundamental harms of privacy-invading social media. Implementing xenophobic, free-speech-infringing policy is having the unintended consequence of driving folks to platforms with even more aggressive censorship. This outcome was foreseeable. Rather than a knee-jerk reaction banning the latest perceived threat, these issues could have been avoided by addressing privacy harms at the source and enacting strong consumer-protection laws.
Figure 1. Permissions requested by RedNote
android.permission.ACCESS_BACKGROUND_LOCATION: This app can access location at any time, even while the app is not in use.
android.permission.ACCESS_COARSE_LOCATION: This app can get your approximate location from location services while the app is in use. Location services for your device must be turned on for the app to get location.
android.permission.ACCESS_FINE_LOCATION: This app can get your precise location from location services while the app is in use. Location services for your device must be turned on for the app to get location. This may increase battery usage.
android.permission.ACCESS_MEDIA_LOCATION: Allows the app to read locations from your media collection.
android.permission.ACCESS_NETWORK_STATE: Allows the app to view information about network connections such as which networks exist and are connected.
android.permission.ACCESS_WIFI_STATE: Allows the app to view information about Wi-Fi networking, such as whether Wi-Fi is enabled and the name of connected Wi-Fi devices.
android.permission.AUTHENTICATE_ACCOUNTS: Allows the app to use the account authenticator capabilities of the AccountManager, including creating accounts and getting and setting their passwords.
android.permission.BLUETOOTH: Allows the app to view the configuration of Bluetooth on the phone, and to make and accept connections with paired devices.
android.permission.BLUETOOTH_ADMIN: Allows the app to configure the local Bluetooth phone, and to discover and pair with remote devices.
android.permission.BLUETOOTH_CONNECT: Allows the app to connect to paired Bluetooth devices.
android.permission.CAMERA: This app can take pictures and record videos using the camera while the app is in use.
android.permission.CHANGE_NETWORK_STATE: Allows the app to change the state of network connectivity.
android.permission.CHANGE_WIFI_STATE: Allows the app to connect to and disconnect from Wi-Fi access points and to make changes to device configuration for Wi-Fi networks.
android.permission.EXPAND_STATUS_BAR: Allows the app to expand or collapse the status bar.
android.permission.FLASHLIGHT: Allows the app to control the flashlight.
android.permission.FOREGROUND_SERVICE: Allows the app to make use of foreground services.
android.permission.FOREGROUND_SERVICE_DATA_SYNC: Allows the app to make use of foreground services with the type dataSync.
android.permission.FOREGROUND_SERVICE_LOCATION: Allows the app to make use of foreground services with the type location.
android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK: Allows the app to make use of foreground services with the type mediaPlayback.
android.permission.FOREGROUND_SERVICE_MEDIA_PROJECTION: Allows the app to make use of foreground services with the type mediaProjection.
android.permission.FOREGROUND_SERVICE_MICROPHONE: Allows the app to make use of foreground services with the type microphone.
android.permission.GET_ACCOUNTS: Allows the app to get the list of accounts known by the phone. This may include any accounts created by applications you have installed.
android.permission.INTERNET: Allows the app to create network sockets and use custom network protocols. The browser and other applications provide means to send data to the internet, so this permission is not required to send data to the internet.
android.permission.MANAGE_ACCOUNTS: Allows the app to perform operations like adding and removing accounts, and deleting their password.
android.permission.MANAGE_MEDIA_PROJECTION: Allows an application to manage media projection sessions. These sessions can provide applications the ability to capture display and audio contents. Should never be needed by normal apps.
android.permission.MODIFY_AUDIO_SETTINGS: Allows the app to modify global audio settings such as volume and which speaker is used for output.
android.permission.POST_NOTIFICATIONS: Allows the app to show notifications.
android.permission.READ_CALENDAR: This app can read all calendar events stored on your phone and share or save your calendar data.
android.permission.READ_CONTACTS: Allows the app to read data about your contacts stored on your phone. Apps will also have access to the accounts on your phone that have created contacts. This may include accounts created by apps you have installed. This permission allows apps to save your contact data, and malicious apps may share contact data without your knowledge.
android.permission.READ_EXTERNAL_STORAGE: Allows the app to read the contents of your shared storage.
android.permission.READ_MEDIA_AUDIO: Allows the app to read audio files from your shared storage.
android.permission.READ_MEDIA_IMAGES: Allows the app to read image files from your shared storage.
android.permission.READ_MEDIA_VIDEO: Allows the app to read video files from your shared storage.
android.permission.READ_PHONE_STATE: Allows the app to access the phone features of the device. This permission allows the app to determine the phone number and device IDs, whether a call is active, and the remote number connected by a call.
android.permission.READ_SYNC_SETTINGS: Allows the app to read the sync settings for an account. For example, this can determine whether the People app is synced with an account.
android.permission.RECEIVE_BOOT_COMPLETED: Allows the app to have itself started as soon as the system has finished booting. This can make it take longer to start the phone and allow the app to slow down the overall phone by always running.
android.permission.RECEIVE_USER_PRESENT: Unknown permission from android reference
android.permission.RECORD_AUDIO: This app can record audio using the microphone while the app is in use.
android.permission.REQUEST_IGNORE_BATTERY_OPTIMIZATIONS: Allows an app to ask for permission to ignore battery optimizations for that app.
android.permission.REQUEST_INSTALL_PACKAGES: Allows an application to request installation of packages.
android.permission.SCHEDULE_EXACT_ALARM: This app can schedule work to happen at a desired time in the future. This also means that the app can run when you're not actively using the device.
android.permission.SYSTEM_ALERT_WINDOW: This app can appear on top of other apps or other parts of the screen. This may interfere with normal app usage and change the way that other apps appear.
android.permission.USE_CREDENTIALS: Allows the app to request authentication tokens.
android.permission.VIBRATE: Allows the app to control the vibrator.
android.permission.WAKE_LOCK: Allows the app to prevent the phone from going to sleep.
android.permission.WRITE_CALENDAR: This app can add, remove, or change calendar events on your phone. This app can send messages that may appear to come from calendar owners, or change events without notifying their owners.
android.permission.WRITE_CLIPBOARD_SERVICE: Unknown permission from android reference
android.permission.WRITE_EXTERNAL_STORAGE: Allows the app to write the contents of your shared storage.
android.permission.WRITE_SETTINGS: Allows the app to modify the system's settings data. Malicious apps may corrupt your system's configuration.
android.permission.WRITE_SYNC_SETTINGS: Allows an app to modify the sync settings for an account. For example, this can be used to enable sync of the People app with an account.
cn.org.ifaa.permission.USE_IFAA_MANAGER: Unknown permission from android reference
com.android.launcher.permission.INSTALL_SHORTCUT: Allows an application to add Homescreen shortcuts without user intervention.
com.android.launcher.permission.READ_SETTINGS: Unknown permission from android reference
com.asus.msa.SupplementaryDID.ACCESS: Unknown permission from android reference
com.coloros.mcs.permission.RECIEVE_MCS_MESSAGE: Unknown permission from android reference
com.google.android.gms.permission.AD_ID: Unknown permission from android reference
com.hihonor.push.permission.READ_PUSH_NOTIFICATION_INFO: Unknown permission from android reference
com.hihonor.security.permission.ACCESS_THREAT_DETECTION: Unknown permission from android reference
com.huawei.android.launcher.permission.CHANGE_BADGE: Unknown permission from android reference
com.huawei.android.launcher.permission.READ_SETTINGS: Unknown permission from android reference
com.huawei.android.launcher.permission.WRITE_SETTINGS: Unknown permission from android reference
com.huawei.appmarket.service.commondata.permission.GET_COMMON_DATA: Unknown permission from android reference
com.huawei.meetime.CAAS_SHARE_SERVICE: Unknown permission from android reference
com.meizu.c2dm.permission.RECEIVE: Unknown permission from android reference
com.meizu.flyme.push.permission.RECEIVE: Unknown permission from android reference
com.miui.home.launcher.permission.INSTALL_WIDGET: Unknown permission from android reference
com.open.gallery.smart.Provider: Unknown permission from android reference
com.oplus.metis.factdata.permission.DATABASE: Unknown permission from android reference
com.oplus.permission.safe.AI_APP: Unknown permission from android reference
com.vivo.identifier.permission.OAID_STATE_DIALOG: Unknown permission from android reference
com.vivo.notification.permission.BADGE_ICON: Unknown permission from android reference
com.xiaomi.dist.permission.ACCESS_APP_HANDOFF: Unknown permission from android reference
com.xiaomi.dist.permission.ACCESS_APP_META: Unknown permission from android reference
com.xiaomi.security.permission.ACCESS_XSOF: Unknown permission from android reference
com.xingin.xhs.permission.C2D_MESSAGE: Unknown permission from android reference
com.xingin.xhs.permission.JOPERATE_MESSAGE: Unknown permission from android reference
com.xingin.xhs.permission.JPUSH_MESSAGE: Unknown permission from android reference
com.xingin.xhs.permission.MIPUSH_RECEIVE: Unknown permission from android reference
com.xingin.xhs.permission.PROCESS_PUSH_MSG: Unknown permission from android reference
com.xingin.xhs.permission.PUSH_PROVIDER: Unknown permission from android reference
com.xingin.xhs.push.permission.MESSAGE: Unknown permission from android reference
freemme.permission.msa: Unknown permission from android reference
freemme.permission.msa.SECURITY_ACCESS: Unknown permission from android reference
getui.permission.GetuiService.com.xingin.xhs: Unknown permission from android reference
ohos.permission.ACCESS_SEARCH_SERVICE: Unknown permission from android reference
oplus.permission.settings.LAUNCH_FOR_EXPORT: Unknown permission from android reference
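As a footnote to the static-key finding described above: when an app ships the same encryption key in every copy of its binary, anyone who decompiles the app once can decrypt those files for every user. The sketch below illustrates the problem; the key value, IV handling, and file layout are hypothetical, not RedNote's actual scheme.

```python
# Illustrative sketch of why a static, app-wide key provides no real
# confidentiality: anyone who extracts the key from a decompiled build can
# decrypt every user's files. The key value, IV handling, and file layout
# below are hypothetical, not RedNote's actual scheme.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# A hard-coded key as it might be recovered from the app binary (made-up value).
STATIC_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")

def decrypt_app_file(path):
    with open(path, "rb") as f:
        blob = f.read()
    iv, ciphertext = blob[:16], blob[16:]  # assumed layout: 16-byte IV, then ciphertext
    decryptor = Cipher(algorithms.AES(STATIC_KEY), modes.CBC(iv)).decryptor()
    return decryptor.update(ciphertext) + decryptor.finalize()  # padding removal omitted

# Because STATIC_KEY is identical in every installation, this function works
# against any user's files; per-user or per-device keys would prevent that.
```

Real confidentiality requires keys generated per user or per device (for example, held in the platform keystore), so that extracting one installation's key reveals nothing about anyone else's data.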
As calls by the UK’s top leaders for the release of British-Egyptian blogger, coder, and activist Alaa Abd El-Fattah from prison in Cairo continue, Alaa’s mother, math professor Laila Soueif, grows weaker four months into a hunger strike she began in September to keep attention focused on her son and protest the lack of progress in obtaining his release. She has consumed only water, coffee, tea, and rehydration salts for more than 135 days. She is 68 years old, and her condition is becoming dire. It's a shocking and unacceptable situation for Alaa’s family and his many supporters around the world. They continue to get the runaround from the British government about its efforts to get him released. The prime minister and foreign secretary, the key players in the drive to secure Alaa’s release, have expressed support for Alaa and dealt directly with Egypt’s highest authorities on his behalf. But Alaa’s family has received scant information about those discussions. What we do know is that Prime Minister Keir Starmer spoke directly to Egyptian President Abdel Fattah al-Sisi about Alaa during a phone call last summer and in December, but did not raise the issue when the two met at the G20 summit in November. Starmer told Soueif in a January 29 letter (he has so far declined to meet with her) that he is committed to pressing Egypt to release Alaa. “I believe progress is possible, but it will take time,” he said. Likewise, Foreign Secretary David Lammy said in January that he met with Egypt’s foreign minister in Saudi Arabia and has made securing Alaa’s release his number one priority. He spoke to his Egyptian counterpart, Badr Abdel Aty, again while in Cairo. Meanwhile, the government sent a strong message during Egypt’s periodic review before the UN Human Rights Council, saying freeing Alaa was its foremost recommendation and calling his detention “unacceptable.” Yet, there have been no signs that the Egyptian government will free Alaa. He remains in a maximum-security prison outside of Cairo. He has spent the better part of the last 10 years behind bars, unjustly charged for supporting online free speech and privacy for Egyptians and people across the Middle East and North Africa. The Egyptian government’s treatment of Alaa, a prominent global voice during the Arab Spring, is a travesty. “I don’t have time,” Soueif told Agence France-Presse. “We’ve been in this endless loop of imprisonment for almost 10 years,” Soueif told Middle East Eye in explaining why she went on a hunger strike. “I couldn't allow this to go on any further, and there was no reason to believe that if we waited a bit more, he'd come out.” Alaa should have been released on September 29, after serving his five-year sentence for sharing a Facebook post about a death in police custody, but Egyptian authorities have continued his imprisonment in contravention of the country’s own Criminal Procedure Code. Journalist and former foreign correspondent Peter Greste, who befriended Alaa 11 years ago when the two were locked up in the same prison—Greste on terrorism charges for his reporting—joined Soueif in a 21-day hunger strike to show his solidarity. “This injustice has gone on far too long,” he said. Others continue to press for Alaa’s release. This week a group of prominent Egyptian public figures called on President al-Sisi to release Alaa, citing among other things Soueif’s declining health.
Allowing Alaa to get out of prison would not merely be a humanitarian response, but “a strategic decision that would foster a more conciliatory political climate,” they said. EFF and six international partner organizations in December called on Starmer to take immediate action to secure Alaa’s release. We told him that Alaa’s case is a litmus test of the UK’s commitment to human rights. Soueif’s future, and Alaa’s, rest in the UK government’s hands, and it must act now. Starmer needs to pick up the phone and call al-Sisi. If you’re based in the UK, here are some actions you can take to support the calls for Alaa’s release:
Write to your MP (external link): https://freealaa.net/message-mp
Join Laila Soueif outside the Foreign Office in London daily between 10 and 11am
Share Alaa’s plight on social media using the hashtag #freealaa
As President Donald Trump issued an Executive Order in 2020 to retaliate against online services that fact-checked him, a team within the Department of Justice (DOJ) was finalizing a proposal to substantially weaken a key law that protects internet users’ speech. Documents released to EFF as part of a Freedom of Information Act (FOIA) suit reveal that DOJ officials—a self-described “Tiger Team”—were caught off guard by Trump’s retaliatory effort, which was aimed at the same online social services they wanted to regulate further by amending 47 U.S.C. § 230 (Section 230). Section 230 protects users’ online speech by protecting the online intermediaries we all rely on to communicate on blogs, social media platforms, and educational and cultural platforms like Wikipedia and the Internet Archive. Section 230 embodies the principle that we should all be responsible for our own actions and statements online, but generally not those of others. The law prevents most civil suits against users or services that are based on what others say. The correspondence among DOJ officials shows that the group delayed unveiling the agency’s official plans to amend Section 230 in light of Trump’s executive order, which was challenged on First Amendment grounds and later rescinded by President Joe Biden. EFF represented the groups who challenged Trump’s Executive Order and filed two FOIA suits for records about the administration’s implementation of the order. In the most recent FOIA case, the DOJ has been slowly releasing records detailing its work to propose amendments to Section 230, which predated Trump’s Executive Order. The DOJ released the text of its proposed amendments to Section 230 in September 2020, and the proposal would have substantially narrowed the law’s protections. For example, the DOJ’s proposal would have allowed federal civil suits and state and federal criminal prosecutions against online services if they learned that users’ content broke the law. It also would have established notice-and-takedown liability for user-generated content that was deemed to be illegal. Together, these provisions would likely result in online services screening and removing a host of legal content, based on a fear that any questionable material might trigger liability later. The DOJ’s proposal had a distinct emphasis on imposing liability on services should they have hosted illegal content posted by their users. That focus was likely the result of the team DOJ assembled to work on the proposal, which included officials from the agency’s cybercrime division and the FBI. The documents also show that DOJ officials met with attorneys who brought lawsuits against online services to get their perspective on Section 230. This is not surprising, as the DOJ had been meeting with multiple groups throughout 2020 while it prepared a report about Section 230. EFF’s FOIA suit is ongoing, as the DOJ has said that it still has thousands of potential pages to review and possibly release. Although these documents reflect DOJ’s activity from Trump’s first term, they are increasingly relevant as the administration appoints officials who have previously threatened online intermediaries for exercising their own First Amendment rights. EFF will continue to publish all documents released in this FOIA suit and push back on attempts to undermine internet users’ rights to speak online.
Google continues to show us why it chose to abandon its old motto of “Don’t Be Evil,” as it becomes more and more enmeshed with the military-industrial complex. Most recently, Google has removed four key points from its AI principles. Specifically, the principles previously stated that the company would not pursue AI applications involving (1) weapons, (2) surveillance, (3) technologies that “cause or are likely to cause overall harm,” and (4) technologies whose purpose contravenes widely accepted principles of international law and human rights. Those principles are gone now. In their place, the company has written that “democracies” should lead in AI development and companies should work together with governments “to create AI that protects people, promotes global growth, and supports national security.” This could mean that the provider of the world’s largest search engine–the tool most people use to uncover the best apple pie recipes and to find out what time their favorite coffee shop closes–could be in the business of creating AI-based weapons systems and leveraging its considerable computing power for surveillance. This troubling decision to potentially profit from high-tech warfare, which could have serious consequences for real lives and real people, comes after criticism from EFF, human rights activists, and other international groups. Despite its pledges and vocal commitment to human rights, Google has faced criticism for its involvement in Project Nimbus, which provides advanced cloud and AI capabilities to the Israeli government, tools that an increasing number of credible reports suggest are being used to target civilians under pervasive surveillance in the Occupied Palestinian Territories. EFF said in 2024, “When a company makes a promise, the public should be able to rely on it.” Rather than fully living up to its previous human rights commitments, it seems Google has shifted its priorities. Google, a company valued at $2.343 trillion with global infrastructure and a massive legal department, appears to be leaning into the current anti-humanitarian moment. The fifth largest company in the world seems to have chosen to make the few extra bucks (relative to the company’s earnings and net worth) that will come from mass surveillance tools and AI-enhanced weapons systems. And of course we can tell why. With government money flying out the door toward defense contractors, surveillance technology companies, and other national security and policing related vendors, the legacy companies who swallow up all of that data don’t want to miss out on the feeding frenzy. With $1 billion contracts on the table even for smaller companies promising AI-enhanced tech, it looks like Google is willing to throw its lot in with the herd. In addition to Google and Amazon’s involvement with Project Nimbus, which involved both cloud storage for the large amounts of data collected from mass surveillance and analysis of that data, there are many other scenarios and products on the market that could raise concerns. AI could be used to power autonomous weapons systems which decide when and if to pull the trigger or drop a bomb. Targeting software can mean physically aiming weapons at people identified by geolocation or by other types of machine learning like face recognition or other biometric technology.
AI could also be used to sift through massive amounts of intelligence, including intercepted communications or publicly available information from social media and the internet, in order to assemble lists of people to be targeted by militaries. Whether autonomous AI-based weapons systems and surveillance are controlled by totalitarian states or by states that meet Google’s definition of “democracy” is of little comfort to the people who could be targeted, spied on, or killed in error by AI technology that is prone to mistakes. AI cannot be accountable for its actions. If we, the public, are able to navigate the corporate, government, and national security secrecy to learn of these flaws, companies will fall back on a playbook we’ve seen before: tinkering with the algorithms and declaring the problem solved. We urge Google, and all of the companies that will follow in its wake, to reverse course. In the meantime, users will have to decide who deserves their business. As the company’s most successful product, its search engine, is faltering, that decision gets easier and easier.
Across the United States, Immigration and Customs Enforcement (ICE) has already begun increasing enforcement operations, including highly publicized raids. As immigrant communities, families, allies, and activists think about what can be done to shift policy and protect people, one thing is certain: similar to filming the police as they operate, you have the right to film ICE, as long as you are not obstructing official duties. Filming ICE agents making an arrest or amassing in your town helps promote transparency and accountability for a system that often relies on intimidation and secrecy and obscures abuse and law-breaking. While it is crucial for people to help aid in transparency and accountability, there are considerations and precautions you should take. For an in-depth guide by organizations on the frontlines of informing people who wish to record ICE’s interactions with the public, review these handy resources from the hard-working folks at WITNESS and NYCLU. At EFF, here are our general guidelines when it comes to filming law enforcement, including ICE: What to Know When Recording Law Enforcement You have the right to record law enforcement officers exercising their official duties in public. Stay calm and courteous. Do not interfere with law enforcement. If you are a bystander, stand at a safe distance from the scene that you are recording. You may take photos or record video and/or audio. Law enforcement cannot order you to move because you are recording, but they may order you to move for public safety reasons even if you are recording. Law enforcement may not search your cell phone or other device without a warrant based on probable cause from a judge, even if you are under arrest. Thus, you may refuse a request from an officer to review or delete what you recorded. You also may refuse to unlock your phone or provide your passcode. Despite reasonably exercising your First Amendment rights, law enforcement officers may illegally retaliate against you in a number of ways including with arrest, destruction of your device, and bodily harm. They may also try to retaliate by harming the person being arrested. We urge you to remain alert and mindful about this possibility. Consider the sensitive nature of recording in the context of an ICE arrest. The person being arrested or their loved ones may be concerned about exposing their immigration status, so think about obtaining consent or blurring out faces in any version you publish to focus on ICE’s conduct (while still retaining the original video). Your First Amendment Right to Record Law Enforcement Officers Exercising Their Official Duties in Public You have a First Amendment right to record law enforcement, which federal courts and the Justice Department have recognized and affirmed. Although the Supreme Court has not squarely ruled on the issue, there is a long line of First Amendment case law from the high court that supports the right to record law enforcement. And federal appellate courts in the First, Third, Fourth, Fifth, Seventh, Eighth, Ninth, Tenth, and Eleventh Circuits have directly upheld this right. EFF has advocated for this right in many amicus briefs. Federal appellate courts typically frame the right to record law enforcement as the right to record officers exercising their official duties in public. This right extends to private places, too, where the recorder has a legal right to be, such as in their own home. 
However, if the law enforcement officer is off-duty or is in a private space that you don’t have a right to be in, your right to record the officer may be limited. Special Considerations for Recording Audio The right to record law enforcement unequivocally includes the right to take pictures and record video. There is an added legal wrinkle when recording audio—whether with or without video. Some law enforcement officers have argued that recording audio without their consent violates wiretap laws. Courts have generally rejected this argument. The Seventh Circuit, for example, held that the Illinois wiretap statute violated the First Amendment as applied to audio recording of on-duty police. There are two kinds of wiretap laws: those that require “all parties” to a conversation to consent to audio recording (12 states), and those that only require “one party” to consent (38 states, the District of Columbia, and the federal statute). Thus, if you’re in a one-party consent state, and you’re involved in an incident with law enforcement (that is, you’re a party to the conversation) and you want to record audio of that interaction, you are the one party consenting to the recording and you don’t also need the law enforcement officer’s consent. If you’re in an all-party consent state, and your cell phone or recording device is in plain view, your open audio recording puts the officer on notice and thus their consent might be implied. Additionally, wiretap laws in both all-party consent states and one-party consent states typically only prohibit audio recording of private conversations—that is, when the parties to the conversation have a reasonable expectation of privacy. Law enforcement officers exercising their official duties, particularly in public, do not have a reasonable expectation of privacy. Neither do civilians in public places who speak to law enforcement in a manner audible to passersby. Thus, if you’re a bystander, you may legally audio record an officer’s interaction with another person, regardless of whether you’re in a state with an all-party or one-party consent wiretap statute. However, you should take into consideration that ICE arrests may expose the immigration status of the person being arrested or their loved ones. As WITNESS puts it: “[I]t’s important to keep in mind the privacy and dignity of the person being targeted by law enforcement. They may not want to be recorded or have the video shared publicly. When possible, make eye contact or communicate with the person being detained to let them know that you are there to observe and document the cops’ behavior. Always respect their wishes if they ask you to stop filming.” You may also want to consider blurring faces to focus on ICE’s conduct if you publish the video online (while still retaining the original version). Moreover, whether you may secretly record law enforcement (whether with photos, video or audio) is important to understand, given that officers may retaliate against individuals who openly record them. At least one federal appellate court, the First Circuit, has affirmed the First Amendment right to secretly audio record law enforcement performing their official duties in public.
On the other hand, the Ninth Circuit recently upheld Oregon’s law that generally bans secret recordings of in-person conversations without all participants’ consent, and only allows recordings of conversations where police officers are participants if “[t]he recording is made openly and in plain view of the participants in the conversation.” Unless you are within the jurisdiction of the First Circuit (Maine, Massachusetts, New Hampshire, Puerto Rico and Rhode Island), it’s probably best to have your recording device in plain view of police officers. Do Not Interfere With Law Enforcement While the weight of legal authority provides that individuals have a First Amendment right to record law enforcement, courts have also stated one important caveat: you may not interfere with officers doing their jobs. The Seventh Circuit, for example, said, “Nothing we have said here immunizes behavior that obstructs or interferes with effective law enforcement or the protection of public safety.” The court further stated, “While an officer surely cannot issue a ‘move on’ order to a person because he is recording, the police may order bystanders to disperse for reasons related to public safety and order and other legitimate law enforcement needs.” Transparency is Vital While a large number of deportations is a constant in the U.S. regardless of who is president or which party is in power, the current administration appears to be intentionally making ICE visible in cities and carrying out flashy raids to sow fear within immigrant communities. Specifically, there are concerns that this administration is targeting people already under government supervision while awaiting their day in court. Bearing witness and documenting the presence and actions of ICE in your communities and neighborhoods is important. You have rights, and one of them is your First Amendment-protected right to film law enforcement officers, including ICE agents. Just because you have the right, however, does not mean law enforcement will always acknowledge and uphold your right in that moment. Be safe and be alert. If you have reason to think your devices might be seized or you may run the risk of putting yourself under surveillance, make sure to check out our Surveillance Self-Defense guides and our field guide to identifying and understanding the surveillance tools law enforcement may employ.
For years now, there has been some concern about the coziness between technology companies and the government. Whether a company complies with casual government requests for data, requires a warrant, or even fights overly broad warrants has been a canary in the digital coal mine during an era where companies may know more about you than your best friends and family. For example, in 2022, law enforcement served a warrant to Facebook for the messages of a 17-year-old girl—messages that were later used as evidence in a criminal trial that the teenager had received an abortion. In 2023, after a four-year wait since announcing its plans, Facebook encrypted its messaging system so that the company no longer had access to the content of those communications. The privacy of messages and the relationship between companies and the government have real-world consequences. That is why a new era of symbiosis between big tech companies and the U.S. government bodes poorly both for our hopes that companies will be critical of requests for data and for any chance of tech regulation and consumer privacy legislation. But this chumminess should also come with a heightened awareness for users: as companies and the government become more entwined through CEO friendships, bureaucratic entanglements, and ideological harmony, we should all be asking what online data is private and what is sitting on a company's servers and accessible to corporate leadership at the drop of a hat. Over many years, EFF has been pushing for users to switch to platforms that understand the value of encrypting data. We have also been pushing platforms to make end-to-end encryption for online communications and for your stored sensitive data the norm. This type of encryption helps ensure that a conversation is private between you and the recipient, and not accessible to the platform that runs it or any other third parties. Thanks to the combined efforts of our organization and dozens of other concerned groups, tech users, and public officials, we now have a lot of options for applications and platforms that take our privacy more seriously than in previous generations. But in light of recent political developments, it’s time for a refresher course: which platforms and applications have encrypted DMs, and which have access to your sensitive personal communications. The existence of what a platform calls “end-to-end encryption” is not foolproof. It may be poorly implemented, lack widespread adoption to attract the attention of security researchers, lack the funding to pay for security audits, or use a less well-established encryption protocol that doesn’t have much public scrutiny. It also can’t protect against other sorts of threats, like someone gaining access to your device or screenshotting a conversation. Being caught using certain apps can itself be dangerous in some cases. And it takes more than just a basic implementation to resist a targeted active attack, as opposed to later collection. But it’s still the best way we currently have to ensure our digital conversations are as private as possible. And more than anything, it needs to be something you and the people you speak with will actually use, so features can be an important consideration. No platform provides a perfect mix of security features for everyone, but understanding the options can help you start figuring out the right choices.
When it comes to popular social media platforms, Facebook Messenger uses end-to-end encryption on private chats by default (this feature is optional in group chats on Messenger, and on some of the company’s other offerings, like Instagram). Other companies, like X, offer optional end-to-end encryption, with caveats, such as only being available to users who pay for verification. Then there are platforms like Snapchat, which have given talks about their end-to-end encryption in the past but don’t provide further details about its current implementation. Other platforms, like Bluesky, Mastodon, and TikTok, do not offer end-to-end encryption in direct messages, which means those conversations could be accessible to the companies that run the platforms or made available to law enforcement upon request. As for apps more specifically designed around chat, there are more examples. Signal offers end-to-end encryption for text messages and voice calls by default with no extra setup on your part, and collects less metadata than other options. Metadata can reveal information such as who you are talking with and when, or your location, which in some cases may be all law enforcement needs. WhatsApp is also end-to-end encrypted. Apple’s Messages app is end-to-end encrypted, but only if everyone in the chat has an iPhone (blue bubbles). The same goes for Google Messages, which is end-to-end encrypted as long as everyone has set it up properly, which sometimes happens automatically. Of course, we have a number of other communication tools at our disposal, like Zoom, Slack, Discord, Telegram, and more. Here, things continue to get complicated, with end-to-end encryption being an optional feature sometimes, like on Zoom or Telegram; available only for specific types of communication, like video and voice calls on Discord but not text conversations; or not being available at all, like with Slack. Many other options exist with varying feature-sets, so it’s always worth doing some research if you find something new. This does not mean you need to avoid these tools entirely, but knowing that your chats may be available to the platform, law enforcement, or an administrator is an important thing to consider when choosing what to say and when to say it. And for high-risk users, the story becomes even more complicated. Even on an encrypted platform, users can be subject to targeted machine-in-the-middle attacks (also known as man-in-the-middle attacks) unless everyone verifies each other’s keys. Most encrypted apps will let you do this manually, but some have started to implement automatic key verification, which is a security win. And encryption doesn’t matter if message backups are uploaded to the company’s servers unencrypted, so it’s important to either choose to not backup messages, or carefully set up encrypted backups on platforms that allow it. This is all before getting into the intricacies of how apps handle deleted and disappearing messages, or whether there’s a risk of being found with an encrypted app in the first place. CEOs are not the beginning and the end of a company’s culture and concerns—but we should take their commitments and signaled priorities seriously. At a time when some companies may be cozying up to the parts of government with the power to surveil and marginalize, it might be an important choice to move our data and sensitive communications to different platforms.
After all, even if you are not at specific risk of being targeted by the government, your removed participation on a platform sends a clear political message about what you value in a company.
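To make the key-verification point above a little more concrete, here is a minimal sketch of the idea behind "safety numbers": both parties derive a short fingerprint from the public keys in the conversation and compare it over a separate, trusted channel. The digest and formatting here are illustrative assumptions, not any particular app's actual protocol.

```python
# Illustrative sketch of out-of-band key verification (the idea behind
# "safety numbers"): both parties derive a short fingerprint from the public
# keys in the conversation and compare it over a separate, trusted channel.
# The digest and formatting are assumptions, not any specific app's protocol.
import hashlib

def safety_number(my_public_key: bytes, their_public_key: bytes) -> str:
    # Sort the keys so both sides compute the same value regardless of order.
    material = b"".join(sorted([my_public_key, their_public_key]))
    digest = hashlib.sha256(material).hexdigest()
    # Group the digits so the fingerprint is easy to read aloud or compare.
    return " ".join(digest[i:i + 5] for i in range(0, 30, 5))

# If a machine-in-the-middle substitutes its own key, the fingerprints the
# two users compute will no longer match, exposing the attack.
```

This is why apps encourage comparing safety numbers in person or over another channel you already trust: a mismatch is the visible sign that someone has swapped in their own key.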
Lawsuit Argues Defendants Violated the Privacy Act by Disclosing Sensitive Data NEW YORK—EFF and a coalition of privacy defenders led by Lex Lumina filed a lawsuit today asking a federal court to stop the U.S. Office of Personnel Management (OPM) from disclosing millions of Americans’ private, sensitive information to Elon Musk and his “Department of Government Efficiency” (DOGE). The complaint on behalf of two labor unions and individual current and former government workers across the country, filed in the U.S. District Court for the Southern District of New York, also asks that any data disclosed by OPM to DOGE so far be deleted. The complaint by EFF, Lex Lumina LLP, State Democracy Defenders Fund, and The Chandra Law Firm argues that OPM and OPM Acting Director Charles Ezell illegally disclosed personnel records to Musk’s DOGE in violation of the federal Privacy Act of 1974. Last week, a federal judge temporarily blocked DOGE from accessing a critical Treasury payment system under a similar lawsuit. This lawsuit’s plaintiffs are the American Federation of Government Employees AFL-CIO; the Association of Administrative Law Judges, International Federation of Professional and Technical Engineers Judicial Council 1 AFL-CIO; Vanessa Barrow, an employee of the Brooklyn Veterans Affairs Medical Center; George Jones, President of AFGE Local 2094 and a former employee of VA New York Harbor Healthcare; Deborah Toussant, a former federal employee; and Does 1-100, representing additional current or former federal workers or contractors. As the federal government is the nation’s largest employer, the records held by OPM represent one of the largest collections of sensitive personal data in the country. In addition to personally identifiable information such as names, social security numbers, and demographic data, these records include work information like salaries and union activities; personal health records and information regarding life insurance and health benefits; financial information like death benefit designations and savings programs; nondisclosure agreements; and information concerning family members and other third parties referenced in background checks and health records. OPM holds these records for tens of millions of Americans, including current and former federal workers and those who have applied for federal jobs. OPM has a history of privacy violations—an OPM breach in 2015 exposed the personal information of 22.1 million people—and its recent actions make its systems less secure. With few exceptions, the Privacy Act limits the disclosure of federally maintained sensitive records on individuals without the consent of the individuals whose data is being shared. It protects all Americans from harms caused by government stockpiling of our personal data. This law was enacted in 1974, the last time Congress acted to limit the data collection and surveillance powers of an out-of-control President. “The Privacy Act makes it unlawful for OPM Defendants to hand over access to OPM’s millions of personnel records to DOGE Defendants, who lack a lawful and legitimate need for such access,” the complaint says. “No exception to the Privacy Act covers DOGE Defendants’ access to records held by OPM.
OPM Defendants’ action granting DOGE Defendants full, continuing, and ongoing access to OPM’s systems and files for an unspecified period means that tens of millions of federal-government employees, retirees, contractors, job applicants, and impacted family members and other third parties have no assurance that their information will receive the protection that federal law affords.” For more than 30 years, EFF has been a fierce advocate for digital privacy rights. In that time, EFF has been at the forefront of exposing government surveillance and invasions of privacy—such as forcing the release of hundreds of pages of documents about domestic surveillance under the Patriot Act—and enforcing existing privacy laws to protect ordinary Americans—such as in its ongoing lawsuit against Sacramento's public utility company for sharing customer data with police. For the complaint: https://www.eff.org/document/afge-v-opm-complaint For more about the litigation: https://www.eff.org/deeplinks/2025/02/eff-sues-doge-and-office-personnel-management-halt-ransacking-federal-data Contacts: Electronic Frontier Foundation: press@eff.org Lex Lumina LLP: Managing Partner Rhett Millsaps, rhett@lex-lumina.com
Congress has begun debating the TAKE IT DOWN Act (S. 146), a bill that seeks to speed up the removal of a troubling type of online content: non-consensual intimate imagery, or NCII. In recent years, concerns have also grown about the use of digital tools to alter or create such images, sometimes called deepfakes. While protecting victims of these heinous privacy invasions is a legitimate goal, good intentions alone are not enough to make good policy. As currently drafted, the TAKE IT DOWN Act mandates a notice-and-takedown system that threatens free expression, user privacy, and due process, without addressing the problem it claims to solve. The Bill Will Lead To Overreach and Censorship TAKE IT DOWN mandates that websites and other online services remove flagged content within 48 hours and requires “reasonable efforts” to identify and remove known copies. Although this provision is designed to allow NCII victims to remove this harmful content, its broad definitions and lack of safeguards will likely lead to people misusing the notice-and-takedown system to remove lawful speech. The takedown provision applies to a much broader category of content—potentially any images involving intimate or sexual content—than the narrower NCII definitions found elsewhere in the bill. The takedown provision also lacks critical safeguards against frivolous or bad-faith takedown requests. Lawful content—including satire, journalism, and political speech—could be wrongly censored. The legislation’s tight time frame requires that apps and websites remove content within 48 hours, meaning that online service providers, particularly smaller ones, will have to comply so quickly to avoid legal risk that they won’t be able to verify claims. Instead, automated filters will be used to catch duplicates, but these systems are infamous for flagging legal content, from fair-use commentary to news reporting. TAKE IT DOWN creates a far broader internet censorship regime than the Digital Millennium Copyright Act (DMCA), which has been widely abused to censor legitimate speech. But at least the DMCA has an anti-abuse provision and protects services from copyright claims should they comply. TAKE IT DOWN contains none of those minimal speech protections and essentially greenlights misuse of its takedown regime. TAKE IT DOWN Threatens Encrypted Services The online services that do the best job of protecting user privacy could also be under threat from Take It Down. While the bill exempts email services, it does not provide clear exemptions for private messaging apps, cloud storage, and other end-to-end encrypted (E2EE) services. Services that use end-to-end encryption, by design, are not able to access or view unencrypted user content. How could such services comply with the takedown requests mandated in this bill? Platforms may respond by abandoning encryption entirely in order to be able to monitor content—turning private conversations into surveilled spaces. In fact, victims of NCII often rely on encryption for safety—to communicate with advocates they trust, store evidence, or escape abusive situations. The bill’s failure to protect encrypted communications could harm the very people it claims to help. Victims Of NCII Have Legal Options Under Existing Law An array of criminal and civil laws already exist to address NCII. 
In addition to 48 states that have specific laws criminalizing the distribution of non-consensual pornography, there are defamation, harassment, and extortion statutes that can all be wielded against people abusing NCII. Since 2022, NCII victims have also been able to bring federal civil lawsuits against those who spread this harmful content. As we explained in 2018: If a deepfake is used for criminal purposes, then criminal laws will apply. If a deepfake is used to pressure someone to pay money to have it suppressed or destroyed, extortion laws would apply. For any situations in which deepfakes were used to harass, harassment laws apply. There is no need to make new, specific laws about deepfakes in either of these situations. In many cases, civil claims could also be brought against those distributing the images under causes of action like False Light invasion of privacy. False light claims commonly address photo manipulation, embellishment, and distortion, as well as deceptive uses of non-manipulated photos for illustrative purposes. A false light plaintiff (such as a person harmed by NCII) must prove that a defendant (such as a person who uploaded NCII) published something that gives a false or misleading impression of the plaintiff in such a way as to damage the plaintiff’s reputation or cause them great offense. Congress should focus on enforcing and improving these existing protections, rather than opting for a broad takedown regime that is bound to be abused. Private platforms can play a part as well, improving reporting and evidence collection systems.
EFF and a coalition of privacy defenders have filed a lawsuit today asking a federal court to block Elon Musk’s Department of Government Efficiency (DOGE) from accessing the private information of millions of Americans that is stored by the Office of Personnel Management (OPM), and to delete any data that has been collected or removed from databases thus far. The lawsuit also names OPM, and asks the court to block OPM from sharing further data with DOGE. The Plaintiffs who have stepped forward to bring this lawsuit include individual federal employees as well as multiple employee unions, including the American Federation of Government Employees and the Association of Administrative Law Judges. This brazen ransacking of Americans’ sensitive data is unheard of in scale. With our co-counsel Lex Lumina, State Democracy Defenders Fund, and the Chandra Law Firm, we represent current and former federal employees whose privacy has been violated. We are asking the court for a temporary restraining order to immediately cease this dangerous and illegal intrusion. This massive trove of information includes private demographic data and work histories of essentially all current and former federal employees and contractors as well as federal job applicants. Access is restricted by the federal Privacy Act of 1974. Last week, a federal judge temporarily blocked DOGE from accessing a critical Treasury payment system under a similar lawsuit. What’s in OPM’s Databases? The data housed by OPM is extraordinarily sensitive for several reasons. The federal government is the nation’s largest employer, and OPM’s records are one of the largest, if not the largest, collection of employee data in the country. In addition to personally identifiable information such as names, social security numbers, and demographics, it includes work experience, union activities, salaries, performance, and demotions; health information like life insurance and health benefits; financial information like death benefit designations and savings programs; and classified information nondisclosure agreements. It holds records for millions of federal workers and millions more Americans who have applied for federal jobs. The mishandling of this information could lead to such significant and varied abuses that they are impossible to detail. On its own, DOGE’s unchecked access puts the safety of all federal employees at risk of everything from privacy violations to political pressure to blackmail to targeted attacks. Last year, Elon Musk publicly disclosed the names of specific government employees whose jobs he claimed he would cut before he had access to the system. He has also targeted at least one former employee of Twitter. With unrestricted access to OPM data, and with his ownership of the social media platform X, federal employees are at serious risk. And that’s just the danger from disclosure of the data on individuals. OPM’s records could give an overview of various functions of entire government agencies and branches. Regardless of intention, the law makes it clear that this data is carefully protected and cannot be shared indiscriminately. In late January, OPM reportedly sent about two million federal employees its "Fork in the Road" form email introducing a “deferred resignation” program. This is a visible way in which the data could be used; OPM’s databases contain the email addresses for every federal employee.
How the Privacy Act Protects Americans’ Data Under the Privacy Act of 1974, disclosure of government records about individuals generally requires the written consent of the individual whose data is being shared, with few exceptions. Congress passed the Privacy Act in response to a crisis of confidence in the government as a result of scandals including Watergate and the FBI’s Counter Intelligence Program (COINTELPRO). The Privacy Act, like the Foreign Intelligence Surveillance Act of 1978, was created at a time when the government was compiling massive databases of records on ordinary citizens and had minimal restrictions on sharing them, often with erroneous information and in some cases for retaliatory purposes. Congress was also concerned with the potential for abuse presented by the increasing use of electronic records and the use of identifiers such as social security numbers, both of which made it easier to combine individual records housed by various agencies and to share that information. In addition to protecting our private data from disclosure to others, the Privacy Act, along with the Freedom of Information Act, also allows us to find out what information is stored about us by the government. The Privacy Act includes a private right of action, giving ordinary people the right to decide for themselves whether to bring a lawsuit to enforce their statutory privacy rights, rather than relying on government agencies or officials. It is no coincidence that these protections were created the last time Congress rose to the occasion of limiting the surveillance powers of an out-of-control President. That was fifty years ago; the potential impact of leaking this government information, representing the private lives of millions, is now even more serious. DOGE and OPM are violating Americans’ most fundamental privacy rights at an almost unheard-of scale. OPM’s Data Has Been Under Assault Before Ten years ago, OPM announced that it had been the target of two data breaches. Over twenty million security clearance records—information on anyone who had undergone a federal employment background check, including their relatives and references—were reportedly stolen by state-sponsored attackers working for the Chinese government. At the time, it was considered one of the most potentially damaging breaches in government history. DOGE employees likely have access to significantly more data than this. Just as an example, the OPM databases also include personal information for anyone who applied to a federal job through USAJobs.gov—24.5 million people last year. Make no mistake: this is, in many ways, a worse breach than what occurred in 2014. DOGE has access to ten more years of data; it likely includes what was breached before, as well as significantly more sensitive data. (This is not to mention that while DOGE has access to these databases, they reportedly have the ability to not only export records, but to add them, modify them, or delete them.) Every day that DOGE maintains its current level of access, more risks mount. EFF Fights for Privacy EFF has fought to protect privacy for nearly thirty-five years at the local, state, and federal level, as well as around the world.
We have been at the forefront of exposing government surveillance and invasions of privacy: In 2006, we sued AT&T on behalf of its customers for violating privacy law by collaborating with the NSA in the massive, illegal program to wiretap and data-mine Americans’ communications. We also filed suit against the NSA in 2008; both cases arose from surveillance that the U.S. government initiated in the aftermath of 9/11. In addition to leading or serving as co-counsel in lawsuits, such as our ongoing case against Sacramento's public utility company for sharing customer data with police, EFF has filed amicus briefs in hundreds of cases to protect privacy, free speech, and creativity. EFF’s fight for privacy spans advocacy and technology as well: Our free browser extension, Privacy Badger, protects millions of individuals from invasive spying by third-party advertisers. Another browser extension, HTTPS Everywhere, alongside Certbot, a tool that makes it easy to install free HTTPS certificates for websites, helped secure the web, which has now largely switched from non-secure HTTP to the more secure HTTPS protocol. EFF also fights to improve privacy protections by advancing strong laws, such as the California Electronic Communications Privacy Act (CalECPA) in 2015, which requires state law enforcement to get a warrant before they can access electronic information about who we are, where we go, who we know, and what we do. We also have a long, successful history of pushing companies, from Apple to Amazon, to protect user privacy.

What’s Next

The question is not “what happens if this data falls into the wrong hands.” The data has already fallen into the wrong hands, according to the law, and it must be safeguarded immediately. Violations of Americans’ privacy have played out across multiple agencies, without oversight or safeguards, and EFF is glad to join the brigade of lawsuits to protect this critical information. Our case is fairly simple: OPM’s data is extraordinarily sensitive, OPM gave it to DOGE, and this violates the Privacy Act. We are asking the court to block any further data sharing and to demand that DOGE immediately destroy any and all copies of downloaded material. You can view the press release for this case here.

Related Cases: American Federation of Government Employees v. U.S. Office of Personnel Management
Digital security training can feel overwhelming, and not everyone will have access to new apps, new devices, and new tools. There also isn't one single system of digital security training, and we can't know the security plans of everyone we communicate with—some people might have concerns about payment processors preventing them from receiving payment for their online work, while others might be concerned about doxxing or safely communicating sensitive medical information. This is why good privacy decisions begin with proper knowledge about your situation and a community-oriented approach. To start, explore the following questions together with your friends and family, organizing groups, and others:

What do we want to protect? This might include sensitive messages, intimate images, or information about where protests are organized.

Who do we want to protect it from? For example, law enforcement or stalkers.

How much trouble are we willing to go through to try to prevent potential consequences? After all, convincing everyone to pivot to a different app when they like their current service might be tricky!

Who are our allies? Besides those who are collaborating with you throughout this process, it’s a good idea to identify others who are on your side. Because they’re likely to share the same threats you do, they can be a part of your protection plans.

This might seem like a big task, so here are a few essentials:

Use Secure Messaging Services for Every Communication

Private communication is a fundamental human right. In the online world, the best tool we have to defend this right is end-to-end encryption, which ensures that only the sender and recipient of a communication have access to its content. But this protection does not reach its full potential without others joining you in communicating on these platforms. Of the most common messaging apps, Signal provides the most extensive privacy protections through its use of end-to-end encryption, and is available for download across the globe. But we know it might not always be possible to encourage everyone in your network to transition away from their current services. There are alternatives, though. WhatsApp, one of the most popular communication platforms in the world, uses end-to-end encryption, but collects more metadata than Signal. Facebook Messenger now also provides end-to-end encryption by default in one-on-one direct messages. Specific privacy concerns remain with group chats. Facebook Messenger has not enabled end-to-end encryption for chats that include more than two people, and popular platforms like Slack and Discord similarly do not provide these protections. These services may appear more user-friendly in accommodating large numbers, but in the absence of real privacy protections, make sure you consider what is being communicated on these sites and use alternative messaging services when talking about sensitive topics. As a service's user base gets larger and more diverse, it's less likely that simply downloading and using it will indicate anything about a particular user's activities. For example, the more people use Signal, the less those seeking reproductive health care or coordinating a protest would stand out by downloading it. So beyond protecting just your communications, you’re building up a user base that can protect others who use encrypted, secure services and give them the shield of a crowd. It also protects your messages from being available to law enforcement should they request them from the platforms you use.
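To make the end-to-end idea concrete, here is a minimal sketch using the PyNaCl library (our choice for illustration; the post does not name any library, and real messengers such as Signal use far more elaborate, audited protocols). The point is simply that the message is encrypted to the recipient's key on the sender's device, so any server in the middle only ever handles ciphertext:

```python
# Illustrative only -- real apps add ratcheting, authentication, and metadata
# protections that this sketch does not attempt.
from nacl.public import PrivateKey, Box

# Each person generates a keypair; private keys never leave their own devices.
alice_secret = PrivateKey.generate()
bob_secret = PrivateKey.generate()

# Alice encrypts a message that only Bob's private key can open.
outgoing = Box(alice_secret, bob_secret.public_key)
ciphertext = outgoing.encrypt(b"Meet at the library at 6pm")

# Whatever service relays or stores `ciphertext` cannot read it.

# Bob decrypts with his private key and Alice's public key.
incoming = Box(bob_secret, alice_secret.public_key)
print(incoming.decrypt(ciphertext).decode())  # "Meet at the library at 6pm"
```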
In choosing a platform that protects our privacy, we create a space of safety and authenticity away from government and corporate surveillance. For example, prosecutors in Nebraska used messages sent via Facebook Messenger (prior to the platform enabling end-to-end encryption by default) as evidence to charge a mother with three felonies and two misdemeanors for assisting her daughter with an abortion. Given that someone known to the family reported the incident to law enforcement, it’s unlikely using an end-to-end encrypted service would have prevented the arrest entirely, but it would have prevented the contents of personal messages turned over by Meta from being used as evidence in the case. Beyond this, it's important to know the privacy limitations of the platforms you communicate on. For example, while a secure messaging app might prevent government and corporate eavesdroppers from snooping on conversations, that doesn't stop someone you're communicating with from taking screenshots, or the government from attempting to compel you (or your contact) to turn over your messages yourselves. Secure messaging apps also don't protect you when someone gets physical access to an unlocked phone with all those messages on it, which is why you may want to consider enabling disappearing message features for certain conversations.

Consider The Content You Post On Social Media

We’re all interconnected in this digital age. Even without everyone having access to their own personal device or the internet, it is pretty difficult to completely opt out of online life. One person’s decision to upload a picture to a social media platform may impact another person without that person even knowing it, such as revealing an association with a movement or a topic that they don’t want to be public knowledge. Talk with your friends about the potentially sensitive data you reveal about each other online. Even if you don’t have a social media account, or if you untag yourself from posts, friends can still unintentionally identify you, report your location, and make their connections to you public. This works in the offline world too, such as sharing precautions with organizers and fellow protesters when going to a demonstration, and discussing ahead of time how you can safely document and post the event online without exposing those in attendance to harm. It’s important to carefully consider the tradeoffs between publicity and privacy when it comes to social media. If you’re promoting something important that needs greater reach, it may be worth posting to the more popular platforms that undermine user privacy. If you do, it’s vital that you compartmentalize your personal information (registration credentials, post attribution, friends list, etc.) away from these accounts. If you are organizing online or conversing on potentially sensitive issues, choose platforms that limit the amount of information collected and tracking undertaken. We know this is not always possible—perhaps people cannot access different applications, or might not have interest in downloading or using a different service. In this scenario, think about how you can protect your community on the platform you currently engage on. For example, if you currently use Facebook for organizing, work with others to keep your Facebook groups as private and secure as Facebook allows.
Think About Cloud Servers as Other People’s Computers

For our online world to function, corporations use online servers (often referred to as the cloud) to store the massive amounts of data collected from our devices. When we back up our content to these cloud services, corporations may run automated tools to check the content being stored, including scanning all our messages, pictures, and videos. The best-case scenario when content is falsely flagged is that your account is temporarily blocked; the worst case could see your entire account deleted and/or legal action initiated over content perceived to be illegal. For example, in 2021 a father took pictures of his son’s groin area and sent them to a health care provider’s messaging service. Days later, his Google account was disabled because the photos constituted “a severe violation of Google’s policies and might be illegal,” with an attached link flagging “child sexual abuse and exploitation” as one of the possible reasons. Despite the photos being taken for medical purposes, Google refused to reinstate the account, meaning that the father lost access to years of emails, pictures, account login details, and more. In a similar case, a father in Houston took photos of his child’s infected intimate parts to send to his wife via Google’s chat feature. Google refused to reinstate this account, too. The adage goes, “there are no clouds, just other people’s computers.” It’s true! As countless discoveries over the years have revealed, the information you share on Slack at work is on Slack's computers and made accessible to your employer. So why not take extra care to choose whose computers you’re trusting with sensitive information? If it makes sense to back up your data onto encrypted thumb drives or one of the limited number of cloud services that offer end-to-end encryption, then do so (a minimal sketch of encrypting a backup before uploading it appears at the end of this post). What’s most important is that you follow through with backing it up. And regularly!

Assign Team Roles

Adopting all of these best practices can be daunting; we get it. Every community is made up of people with different strengths, so with some consideration you can make smart decisions about who does what for the collective privacy and security. Once these responsibilities are broken down into smaller, more easily accomplished tasks, it’s easier for a group to tackle them together. As familiarity with these tasks grows, you’ll realize you’re developing a team of experts, and after some time, you can teach each other.

Create Incident Response Plans

Developing a plan for if or when something bad happens is a good practice for anyone, but especially for a community of people who face increased risk. Since many threats are social in nature, such as doxxing or networked harassment, it’s important to strategize with your allies around what to do in the event of such things happening. Doing so before an incident occurs is much easier than when you’re already facing a crisis. Only you and your allies can decide what belongs in such a plan, but some strategies might be:

Isolating the impacted areas, such as shutting down social media accounts and turning off affected devices

Notifying others who may be affected

Switching communications to a predetermined, more secure alternative

Noting the behaviors of suspected threats and documenting them

Outsourcing tasks to someone further from the affected circle who is already aware of this potential responsibility

Everyone's security plans and situations will always be different, which is why we often say that security and privacy are a state of mind, not a purchase.
But the first step is always taking a look at your community and figuring out what's needed and how to get everyone else on board.
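As promised above, here is a minimal sketch of treating the cloud as someone else's computer: encrypt a backup locally before handing it to any storage provider, so the provider only ever stores ciphertext. It uses Python's cryptography package purely for illustration; the filenames and key handling are hypothetical, and a real backup plan also needs a safe place to keep the key.

```python
# A sketch of client-side encryption before upload -- not a complete backup tool.
from cryptography.fernet import Fernet

# Generate the key once and store it somewhere the cloud provider never sees,
# e.g. a password manager or an offline drive. Losing the key means losing the backup.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("family-photos.tar", "rb") as f:       # hypothetical backup archive
    plaintext = f.read()

ciphertext = fernet.encrypt(plaintext)

with open("family-photos.tar.enc", "wb") as f:   # only this encrypted blob goes to the cloud
    f.write(ciphertext)

# To restore, download the blob and decrypt it locally with the same key.
assert fernet.decrypt(ciphertext) == plaintext
```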
Most of the internet’s blessings—the opportunities for communities to connect despite physical borders and oppressive controls, the avenues to hold the powerful accountable without immediate censorship, the sharing of our hopes and frustrations with loved ones and strangers alike—tend to come at a price. Governments, corporations, and bad actors too often use our content for surveillance, exploitation, discrimination, and harm. It’s easy to dismiss these issues because you don’t think they concern you. It might also feel like the whole system is too pervasive to actively opt out of. But we can take small steps to better protect our own privacy, as well as to build an online space that feels as free and safe as speaking with those closest to us in the offline world. This is why a community-oriented approach helps. By speaking with your friends and family, organizing groups, and others to discuss your specific needs and interests, you can build out digital security practices that work for you. This makes it more likely that your privacy practices will become second nature to you and your contacts. Good privacy decisions begin with proper knowledge about your situation—and we’ve got you covered. To learn more about building a community privacy plan, read our ‘how to’ guide here, where we talk you through the topics below in more detail:

Using Secure Messaging Services For Every Communication

At some point, we all need to send a message that’s safe from prying eyes, and the chances of these apps becoming the default for sensitive communications are much higher if we use them for all communications. On an even simpler level, it also means that messages and images sent to family and friends in group chats will be safe from being viewed by automated and human scans on services like Telegram and Facebook Messenger.

Consider The Content You Post On Social Media

Our decision to send messages, take pictures, and interact with online content has a real offline impact, and while we cannot control for every circumstance, we can think about how our social media behavior impacts those closest to us, as well as those in our proximity.

Think About Cloud Servers as Other People’s Computers

When we back up our content to online cloud services, corporations may run automated tools to check the content being stored, including scanning all our messages, pictures, and videos. While we might think we don't have anything to hide, these tools scan without context, and what might be an innocent picture to you may be flagged as harmful or illegal by a corporation's service. So why not take extra care to choose whose computers you’re entrusting with sensitive information?

Assign Team Roles

Once these privacy tasks are broken down into smaller, more easily accomplished projects, it’s much easier for a group to tackle them together.

Create Incident Response Plans

Since many threats are social in nature, such as doxxing or networked harassment, it’s important to strategize with your allies about what to do in such circumstances. Doing so before an incident occurs is much easier than improvising when you’re already facing a crisis.

To dig deeper, continue reading in our blog post Building a Community Privacy Plan here.
Ever since ChatGPT’s debut, artificial intelligence (AI) has been the center of worldwide discussions on the promises and perils of new technologies. This has spawned a flurry of debates on the governance and regulation of large language models and “generative” AI, which have, among other things, resulted in the Biden administration’s executive order on AI and international guiding principles for the development of generative AI, and influenced Europe’s AI Act. As part of that global policy discussion, the UK government hosted the AI Safety Summit in 2023, which was followed in 2024 by the AI Seoul Summit, leading up to this year’s AI Action Summit hosted by France. As heads of state and CEOs head to Paris for the AI Action Summit, the summit’s shortcomings are becoming glaringly obvious. The summit, which is hosted by the French government, has been described as a “pivotal moment in shaping the future of artificial intelligence governance”. However, a closer look at its agenda and the voices it will amplify tells a different story. Focusing on AI’s potential economic contributions, and not differentiating between, for example, large language models and automated decision-making, the summit fails to take into account the many ways in which AI systems can be abused to undermine fundamental rights and push the planet's already stretched ecological limits over the edge. Instead of centering nuanced perspectives on the capabilities of different AI systems and their associated risks, the summit’s agenda paints a one-sided and simplistic image, not reflective of the global discussion on AI governance. For example, the summit’s main program does not include a single panel addressing issues related to discrimination or sustainability. This imbalance is also mirrored in the summit’s speakers, among whom industry representatives notably outnumber civil society leaders. While many civil society organizations are putting on side events to counterbalance the summit’s misdirected priorities, an exclusive summit captured by industry interests cannot claim to be a transformative venue for global policy discussions. The summit’s significant shortcomings are especially problematic in light of the leadership role European countries are claiming when it comes to the governance of AI. The European Union’s AI Act, which recently entered into force, has been celebrated as the world’s first legal framework addressing the risks of AI. However, whether the AI Act will actually “promote the uptake of human centric and trustworthy artificial intelligence” remains to be seen. It's unclear whether the AI Act will provide a framework that incentivizes the rollout of user-centric AI tools or whether it will lock in specific technologies at the expense of users. We like that the new rules contain a lot of promising language on fundamental rights protection; however, exceptions for law enforcement and national security render some of the safeguards fragile. This is especially true when it comes to the use of AI systems in high-risk contexts such as migration, asylum, border controls, and public safety, where the AI Act does little to protect against mass surveillance, profiling, and predictive technologies. We are also concerned by the possibility that other governments will copy-paste the AI Act’s broad exceptions without having the strong constitutional and human rights protections that exist within the EU legal system.
We will therefore keep a close eye on how the AI Act is enforced in practice. The summit also lags in addressing the essential role human rights should play in providing a common baseline for AI deployment, especially in high-impact uses. Although human-rights-related concerns appear in a few sessions, the summit, as a purportedly global forum aimed at unleashing the potential of AI for the public good and in the public interest, at a minimum misses the opportunity to clearly articulate how such a goal connects with fulfilling international human rights guarantees and what steps that would entail. Ramping up government use of AI systems is generally a key piece in national strategies for AI development worldwide. While countries must address the AI divide, doing so must not mean replicating AI harms. For example, we’ve elaborated on leveraging Inter-American human rights standards to tackle challenges and violations that emerge from public institutions’ use of algorithmic systems for rights-affecting determinations in Latin America. In the midst of a global AI arms race, we do not need more hype for AI. Rather, there is a crucial need for evidence-based policy debates that address AI power centralization and consider the real-world harms associated with AI systems—while enabling diverse stakeholders to engage on an equal footing. The AI Action Summit will not be the place to have this conversation.
The Washington Post reported that the United Kingdom is demanding that Apple create an encryption backdoor to give the government access to end-to-end encrypted data in iCloud. Encryption is one of the best ways we have to reclaim our privacy and security in a digital world filled with cyberattacks and security breaches, and there’s no way to weaken it in order to only provide access to the “good guys.” We call on Apple to resist this attempt to undermine the right to private spaces and communications. As reported, the British government’s undisclosed order was issued last month, and requires the capability to view all encrypted material in iCloud. The core target is Apple’s Advanced Data Protection, an optional feature that turns on end-to-end encryption for backups and other data stored in iCloud, making it so that even Apple cannot access that information. For a long time, iCloud backups were a loophole for law enforcement to gain access to data otherwise not available to them on iPhones with device encryption enabled. That loophole still exists for anyone who doesn’t opt in to using Advanced Data Protection. If Apple does comply, users should consider disabling iCloud backups entirely. Perhaps most concerning, the U.K. is apparently seeking a backdoor into users’ data regardless of where they are or what citizenship they have. There is no technological compromise between strong encryption that protects the data and a mechanism to allow the government special access to this data. Any “backdoor” built for the government puts everyone at greater risk of hacking, identity theft, and fraud. There is no world where, once built, these backdoors would only be used by open and democratic governments. These systems can be, and quickly will be, used by more repressive governments around the world to read protesters’ and dissenters’ communications. We’ve seen and opposed these sorts of measures for years. Now is no different. Of course, Apple is not the only company that uses end-to-end encryption. Some of Google’s backup options employ similar protections, as do many chat apps, cloud backup services, and more. If the U.K. government secures access to the encrypted data of Apple users through a backdoor, every other secure file-sharing, communication, and backup tool is at risk. Meanwhile, in the U.S., just last year a top U.S. cybersecurity official declared that “encryption is your friend,” a welcome break from the anti-encryption messaging we at EFF have pushed back on for years. Even the FBI, which has frequently pushed for easier access to data by law enforcement, issued the same recommendation. There is no legal mechanism for the U.S. government to force this same sort of rule on Apple, and we’d hope to see Apple continue to resist it as it has in the past. But what happens in the U.K. will still affect users around the world, especially as the U.K. order specifically stated that Apple would be prohibited from warning its users that its Advanced Data Protection measures no longer work as initially designed. Weakening encryption violates fundamental human rights and annihilates our right to private spaces. Apple has to continue fighting against this order to keep backdoors off users’ devices.
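To see why "special access" cannot be contained, consider a toy model of key escrow (entirely ours; it does not describe Apple's Advanced Data Protection or any specific design the U.K. has proposed). If every user's content key is also wrapped to a single "exceptional access" key, then whoever holds, or steals, that one key can read everyone's data:

```python
# A toy model of an encryption "backdoor" via key escrow -- illustrative only.
from cryptography.fernet import Fernet

escrow_key = Fernet.generate_key()  # the hypothetical government-access key

def store_user_data(plaintext: bytes, user_key: bytes) -> dict:
    """Encrypt data under a fresh content key, then wrap that content key
    both for the user and for the escrow key (the backdoor)."""
    content_key = Fernet.generate_key()
    return {
        "ciphertext": Fernet(content_key).encrypt(plaintext),
        "wrapped_for_user": Fernet(user_key).encrypt(content_key),
        "wrapped_for_escrow": Fernet(escrow_key).encrypt(content_key),
    }

alice_key = Fernet.generate_key()
record = store_user_data(b"messages, photos, medical records", alice_key)

# Anyone who obtains the single escrow key -- by order, insider abuse, or theft --
# can recover the content key and decrypt every user's data:
leaked_content_key = Fernet(escrow_key).decrypt(record["wrapped_for_escrow"])
print(Fernet(leaked_content_key).decrypt(record["ciphertext"]))
```

The weakness is structural: the escrow key's reach is global, so its compromise is global too.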
Minors, like everyone else, have First Amendment rights. These rights extend to their ability to use social media both to speak and to access the speech of others online. But these rights are under attack, as many states seek to limit minors’ use of social media through age verification measures and outright bans. California’s SB 976, or the Protecting Our Kids from Social Media Addiction Act, prohibits minors from using a key feature of social media platforms—personalized recommendation systems, or newsfeeds. This law impermissibly burdens minors’ ability to communicate and find others’ speech on social media. On February 6, 2025, EFF, alongside the Freedom to Read Foundation and Library Futures, filed a brief in the Ninth Circuit Court of Appeals in NetChoice v. Bonta urging the court to overturn the district court decision partially denying a preliminary injunction of SB 976. SB 976 passed into law in September 2024 and prohibits various online platforms from providing personalized recommendation systems to minors without parental consent. For now, this prohibition only applies where the platforms know a user is a minor. Starting in 2027, however, the platforms will need to estimate the age of all their users based on regulations promulgated by the California attorney general. This means that (1) all users of platforms with these systems will need to pass through an age gate to continue using these features, and (2) children without parental consent will be denied access to the protected speech that is organized and distributed via newsfeeds. This is separate from the fact that feeds are central to most platforms’ user experience, and it’s not clear how social media platforms can or will adapt the experience for young people to comply with this law. Because these effects burden both users’ and platforms’ First Amendment rights, EFF filed this friend-of-the-court brief. This work is part of our broader fight against similar age-verification laws at the state and federal levels. EFF got involved in this suit both to advocate for the First Amendment rights of adult and minor users and to correct the dangerous logic adopted by the district court. The district court, hearing NetChoice’s challenge on behalf of online platforms, ruled that the personalized feeds covered by SB 976 are not expressive, and therefore not covered by the First Amendment. The lower court took an extremely narrow view of what constitutes expressive activity, writing that the algorithms behind personalized newsfeeds don’t reflect the messages or editorial choices of their human creators and therefore do not trigger First Amendment scrutiny. The Ninth Circuit has since stayed the district court’s ruling, preliminarily blocking the law from taking effect until it has a chance to consider the issues. EFF pushed back on this flawed reasoning, arguing that “the personalized feeds targeted by SB 976 are inherently expressive, because they (1) reflect the choices made by platforms to organize content on their services, (2) incorporate and respond to the expression users create to distribute users’ speech, and (3) provide users with the means to access speech in a digestible and organized way.” Moreover, the presence of these personalized recommendation systems informs the speech that users create on platforms, as users often create content with the intent of it getting “picked up” by the algorithm and delivered to other users.
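As a rough illustration of why a personalized feed reflects its creators' editorial choices, consider this hypothetical ranking sketch (not any platform's actual algorithm; every name and weight below is invented). Each constant is a human decision about which speech to surface, and for whom:

```python
# A hypothetical feed-ranking function; the weights encode editorial judgments.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    from_followed_account: bool
    matches_user_interests: bool

def score(post: Post) -> float:
    s = 0.0
    s += 2.0 if post.from_followed_account else 0.0   # favor existing relationships
    s += 1.5 if post.matches_user_interests else 0.0  # personalize to this reader
    s += min(post.likes, 100) * 0.01                  # cap how much virality counts
    return s

posts = [
    Post("City council meeting tonight", 40, True, True),
    Post("Celebrity gossip roundup", 5000, False, False),
]
feed = sorted(posts, key=score, reverse=True)  # the kind of "newsfeed" SB 976 restricts
```

Changing any of those constants changes which speech users see first, which is the kind of organizing and prioritizing choice the brief describes as expressive.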
SB 976 burdens the First Amendment rights of minor social media users by blocking their use of primary systems created to distribute their own speech and to hear others’ speech via those systems, EFF’s brief argues. The statute also burdens all internet users’ First Amendment rights because the age-verification scheme it requires will block some adults from accessing lawful speech, make it impossible for them to speak anonymously on these services, and increase their risk of privacy invasions. Under the law, adults and minors alike will need to provide identifying documents to prove their age, which chills users of any age who wish to remain anonymous from accessing protected speech, excludes adults lacking proper documentation, and exposes those who do share their documentation to data breaches or sale of their data. We hope the Ninth Circuit recognizes that personalized recommendation systems are expressive in nature, subjects SB 976 to strict scrutiny, and rejects the district court ruling. Related Cases: NetChoice Must-Carry Litigation