The Digital Justice Initiative works at the intersection of racial justice and technology, data, and privacy. Predatory commercial data practices and invasions of privacy have disparate impacts on communities of color, especially African Americans, immigrants, women of color, and LGBTQ people of color.
For more information, feel free to email us at [email protected].
Our Most Recent Work:
On Oct. 1, David Brody, counsel and senior fellow for Privacy and Technology, participated in a New America Open Technology Institute virtual panel, where he discussed election disinformation on internet platforms. Since the 2016 election, internet platforms have come under increased scrutiny for how they prevent election misinformation from spreading. As the 2020 election nears, concerns that foreign and domestic actors are using these platforms to spread misinformation are once again front and center. Brody had four main points:
- The rapid spread of misinformation comes primarily from domestic actors, and disproportionately from President Trump and his supporters.
- The platforms are failing to protect the right to vote. By continuing to allow and amplify false information, they put the legitimacy of our election under attack. Instead, they could promote accurate information about the security of the election, the security of vote-by-mail, and the rarity of voter fraud.
- Everyone should tune out social media until after the election, because it is doing more harm than good.
- Any time you see something online that immediately energizes you and makes you want to share it right away, take a break, pause, and reflect on it. That immediate visceral reaction is a warning sign. False and misleading stories are designed to exploit your emotions to get quick amplification. If the story is true and important, it will still be true and important in ten minutes.
To view the full event, please click here.
Black Lives Matter D.C. v. Trump
The Lawyers’ Committee for Civil Rights Under Law filed suit on behalf of Black Lives Matter D.C. against President Trump, Attorney General Barr, and other federal officials for unlawfully attacking peaceful demonstrators outside the White House. The crowd, protesting the killing of George Floyd, was assaulted with tear gas and rubber bullets to clear the way for a photo op for the President.
- Read the Lawyers’ Committee press release about the lawsuit.
- Read the complaint filed in United States District Court for the District of Columbia.
Discriminatory Denial of Service: Applying State Public Accommodations Laws to Online Commerce
When a business posts a sign that says, “Whites Only,” it should not matter whether it is written in ink or pixels. The discrimination and harm are the same. However, laws prohibiting discrimination in such “places of public accommodation” vary significantly from one state to another. The Lawyers’ Committee has released a report that reviews the laws of all 50 states and the District of Columbia to assess whether each state’s anti-discrimination statute applies to entities operating through the Internet.
Public accommodation statutes are a cornerstone of civil rights law. In general, a place of public accommodation is a business or other entity that offers goods or services to the general public. However, because states generally enacted public accommodations laws before the invention of the Internet, it can be unclear whether and how they apply to online commerce. Absent anti-discrimination protections, online businesses can refuse service on the basis of race or ethnicity, charge higher prices based on religion, provide subpar products based on gender or sexual orientation, or ignore the accessibility needs of persons with disabilities.
Discriminatory Denial of Service: Applying State Public Accommodations Laws to Online Commerce is a comprehensive survey of state public accommodations laws and their applicability to online entities. The report is available for download as a pdf, or find more information about individual states here.
Commercial Data Practices & Consumer Privacy
Personal data can be misused, intentionally or unintentionally, to harm marginalized communities in numerous ways, including:
- Deceptive voter suppression and disinformation targeting communities of color;
- Discrimination and predatory marketing in employment, housing, lending, education, healthcare, and insurance opportunities through profiling and digital redlining;
- Exploitation through deceptive privacy policies & click-through agreements;
- Discriminatory government surveillance and policing practices;
- Discrimination in online retail and other online places of public accommodation;
- Amplification of white supremacy, anti-Semitism, Islamophobia, sexism, homophobia, and transphobia through unaccountable online platforms and recommendation engines;
- Perpetuation of systemic biases through poorly designed machine learning algorithms;
- Threats to the physical safety of vulnerable populations from careless practices that non-consensually reveal sensitive data;
- Disparities in digital equity, such as access to broadband and online privacy protections; and
- Greater vulnerability of low income communities to harms from identity theft.
Online Civil Rights and Privacy Act
The Digital Justice Initiative and Free Press Action wrote model federal legislation for Congress to consider in its ongoing online privacy debates. This bill is a comprehensive data protection act that builds on decades of civil rights law and experience. Read the bill here and a section-by-section summary here.
Read more about online civil rights legislation:
- The “Public Health Emergency Privacy Act” (PHEPA), endorsed by the Lawyers’ Committee, helps to protect against the risks of widespread surveillance by requiring that the use of COVID tracking tools is voluntary, non-discriminatory, and limited in scope.
- Letter to Congress from Civil Rights, Civil Liberties, and Consumer Groups Calling for Anti-Discrimination Protections in Federal Privacy Legislation (April 4, 2019).
- The Lawyers’ Committee supports the Consumer Online Privacy Rights Act and the Privacy Bill of Rights Act, both of which contain significant protections against the discriminatory use of personal data.
- Kristen Clarke and David Brody, “It’s time for an online Civil Rights Act,” The Hill (Aug. 3, 2018).
The Digital Justice Initiative actively investigates, litigates, and advocates against white supremacy and other forms of hate using the Internet to target African Americans and other marginalized communities. Social media and other Internet-based services have a responsibility to protect their users and customers from targeted threats, harassment, and intimidation on the basis of race or other protected characteristics. There are serious real world harms from online hate, including chilling effects on equal participation in online commerce and social activities; economic and reputational injuries to minority-owned businesses; psychological injuries from threats of violence, rape, and death; living in fear for one’s personal safety; and the instigation of hate crimes. Many of the harmful behaviors that occur online every day would not be tolerated if they occurred in a restaurant, bar, hotel, store, or public park. It is time to hold online platforms to the same standard as brick-and-mortar venues for commerce and social life.
Last month, the FCC imposed a nearly $13 million fine on a neo-Nazi who sent racist and anti-Semitic robocalls targeting communities of color and minority political candidates, including Stacey Abrams, Andrew Gillum, and Dianne Feinstein. Our Digital Justice team brought this matter to the attention of the FCC and briefed the agency on the neo-Nazi robocaller’s operations, why the calls’ hateful nature caused more harm than typical robocalls, and how he violated FCC regulations. To the best of our knowledge, this is the first time the FCC has brought an enforcement action against someone using telecommunications unlawfully to terrorize communities of color, and certainly one of the largest civil penalties ever imposed on an individual white supremacist by a federal agency.
Dumpson v. Ade
On May 1, 2017, Taylor Dumpson was inaugurated as the first female African American student government president at American University in Washington, D.C. The following day, someone hung nooses around campus in a hate crime targeting her. Shortly thereafter, Andrew Anglin wrote a racist article targeting Ms. Dumpson on his neo-Nazi website, The Daily Stormer–at the time the most popular white supremacist website. Anglin incited his followers to threaten and harass Ms. Dumpson on social media, and they obliged, deluging her with intimidation. Ms. Dumpson feared for her physical safety and was afraid to walk around her campus at night; her academic studies were impaired, and she was ultimately diagnosed with post-traumatic stress disorder.
In a lawsuit brought by the Lawyers’ Committee on Ms. Dumpson’s behalf, the court held–for the first time anywhere in the country–that this online harassment campaign unlawfully interfered with Ms. Dumpson’s right to equal enjoyment of public accommodations. A “place of public accommodation” is a business or other entity (in this case her university) that offers goods or services to the general public. The court awarded over $725,000.
CNN news report on the Taylor Dumpson case:
Read the Dumpson v. Ade court documents:
Opinion (Aug. 9, 2019)
Motion for Default Judgment (Apr. 29, 2019)
Amended Complaint (Sept. 21, 2019)
Change the Terms
The Lawyers’ Committee is a principal member of the Change the Terms Coalition, which seeks to help tech companies stop white supremacists from using their services to engage in hateful activities. All online platforms, especially social media companies, must protect their users from hateful activities and the serious real world harm caused by online hate.
Facial Recognition Technology
The Digital Justice Initiative was invited to submit comments to the Privacy and Civil Liberties Oversight Board (PCLOB) ahead of its roundtable about the use of facial recognition technology (FRT) for aviation security. The Lawyers’ Committee is gravely concerned about the misuse of FRT. People of color suffer disproportionate harms from domestic surveillance, both because surveillance technologies are frequently infected with biases and because they are often used in a discriminatory fashion.
Read our full comments to PCLOB.
Freedom Watch v. Google
The Digital Justice Initiative filed an amicus brief in the Court of Appeals for the D.C. Circuit, arguing that the District of Columbia Human Rights Act does not exempt online businesses from its public accommodations protections. Plaintiffs alleged that Google, Facebook, Twitter, and Apple had unlawfully discriminated against them. The District Court dismissed the claims, incorrectly holding that the Act does not apply to online businesses. Our brief explained the errors in the trial court’s decision and the importance of public accommodations laws for guaranteeing civil rights. If the district court decision is affirmed, online businesses would be able to discriminate freely under D.C. law. The Washington Lawyers’ Committee for Civil Rights and Urban Affairs worked with us on this amicus.
Update: The United States Court of Appeals for the D.C. Circuit recently issued its decision in Freedom Watch, holding that places of public accommodation under the D.C. Human Rights Act must operate from a physical location.
The Digital Justice Initiative is actively investigating and building cases seeking to stop the use of personal data to discriminate in advertisements for housing, employment, credit, education, or insurance. Online advertising microtargeting can exclude entire classes of individuals from economic opportunities without any way for those individuals to know they are being discriminated against. The Digital Justice Initiative collaborates with the Economic Justice Project, the Fair Housing and Community Development Project, and other internal and external partners to address discriminatory online advertising.
Recently, the Digital Justice Initiative submitted an amicus brief in the United States District Court for the Northern District of California in support of the plaintiff in Opiotennione v. Facebook, Inc., arguing that Facebook engages in unlawful online segregation and redlining.
Online Voter Suppression
The Digital Justice Initiative actively fights against online voter suppression and election interference targeting African Americans and other marginalized communities. The Senate Intelligence Committee, the Mueller Investigation, and the Intelligence Community all concluded that Russia targeted African Americans with an online voter suppression campaign in the 2016 election, and the threat is ongoing–both from foreign and domestic bad actors. We work with Election Protection to monitor for voter suppression activity online, report it to the relevant online platforms for enforcement, and, when necessary, bring litigation in conjunction with our Voting Rights Project.
The Digital Justice Initiative regularly meets with major tech companies to push them to increase protections for their users from discriminatory uses of their data, voter suppression and election interference, and hateful harassment and threats from white supremacists. The Lawyers’ Committee, for example, convinced Facebook to ban white nationalism after months of sustained advocacy. We maintain ongoing conversations with platforms such as Facebook, Google & YouTube, Reddit, and Twitter. In particular, Digital Justice works with Election Protection to report online voter suppression incidents to the platforms.
Open Letter to Mark Zuckerberg: Facebook, Protect Civil Rights or You Could Face Lawsuits
In November 2019, the Lawyers’ Committee sent an open letter to Mark Zuckerberg, CEO of Facebook, warning him of the company’s legal liability for civil rights violations. Read the letter here.