Inside the Pressure Machine: Investigating the Intimidation and Threats Against Trump Critics

Introduction: When Speaking Out Comes With a Cost

In the past several years, one phrase has appeared again and again across interviews, court transcripts, opinion essays, and congressional hearings: “I spoke up — and then the threats started.” This pattern is especially visible among people who have publicly disagreed with or investigated former President Donald Trump. The threats against Trump critics—whether online abuse, doxxing, legal intimidation, or political pressure—have become a defining feature of the modern political climate. But how did disagreement become dangerous? Why do so many whistleblowers, election workers, judges, journalists, and former administration officials say they experienced harassment after breaking ranks? And what does this intimidating ecosystem reveal about vulnerability, power, and civic courage in a polarized era? This investigation explores the structures, networks, media environments, and cultural feedback loops that contribute to the pressure — and how these forces shape public behavior, silence dissent, and test the foundations of American democracy.

Understanding the Ecosystem of Pressure: What Drives Threats Against Trump Critics?

While no single organization “coordinates” threats, researchers and journalists have documented converging dynamics that create an intimidating environment for dissenters around high-profile political figures.

These forces include:

  • Massive online communities mobilized by political messaging
  • Hyper-partisan media amplification
  • Social media algorithms that reward outrage
  • Influencers who name, target, or mock critics
  • Political rhetoric that frames dissent as betrayal
  • Anonymous online actors willing to escalate to threats

The result is not a traditional conspiracy.
It is an ecosystem — a decentralized pressure machine in which political statements, viral posts, and televised commentary can trigger waves of harassment or scrutiny.

Case Study #1: Election Workers Under Attack

One of the most widely documented examples involves local election workers after the 2020 election.

The Example of Ruby Freeman & Shaye Moss (Georgia)

When Trump and some allies promoted false claims about vote manipulation in Georgia, two poll workers — Shaye Moss and her mother, Ruby Freeman — became the center of national harassment.

According to sworn congressional testimony and reporting from outlets such as The New York Times and Reuters:

  • Their names and images circulated across social platforms.
  • They received thousands of threats.
  • Anonymous callers warned them they would be lynched.
  • People showed up outside their homes.
  • Both women had to temporarily relocate for safety.

Moss testified: “I have never been so scared in my life. I don’t go anywhere without looking over my shoulder.” This wasn’t orchestrated by a single “network” but grew from a chain reaction:

  1. Public accusations →
  2. Viral amplification →
  3. Social media mobilization →
  4. Real-world threats

This sequence recurs in multiple cases involving critics, investigators, public servants, and political dissenters.

Case Study #2: Judges and Prosecutors Facing Threats After High-Profile Investigations

Judges, prosecutors, and their families have increasingly faced harassment following decisions or investigations involving Trump.

Documented Examples:

  • Judges presiding over Trump-related cases reporting heightened security needs
  • Prosecutors receiving threats and online abuse after filing charges
  • Court staff being doxxed on anonymous forums
  • Sheriffs’ offices warning about violent rhetoric spreading online

These incidents have been noted in public safety bulletins, media reports, and legal filings—not as political claims, but as documented realities. The Department of Homeland Security, in various public advisories, has described politically motivated threats against public officials as a growing concern across multiple ideological groups.

Case Study #3: Former Administration Officials Who Broke Ranks

Former Trump advisers, cabinet members, and officials who later disagreed with him publicly often describe facing:

  • Online harassment
  • Threats from anonymous accounts
  • Intense backlash from partisan media followers
  • Pressure campaigns labeling them “traitors” or “disloyal”

Several well-known officials have stated in interviews that speaking out required security measures or personal caution.

These stories highlight a political culture of retaliation where criticism is reframed as treason — amplifying the pressure to stay silent.

How Pressure Campaigns Function: A Journalistic Breakdown

The threats against Trump critics follow consistent patterns. Below is a table summarizing common mechanisms, based on public reporting and social-media research.


📊 Table: The Pressure Machine — Common Patterns of Harassment

| Mechanism | How It Works | Impact on Critics |
| --- | --- | --- |
| Public naming | A figure criticizes an institution or individual on social media or in interviews. | Sudden spikes in harassment, doxxing, and online mobs. |
| Viral outrage cycles | A clip is circulated across partisan platforms. | Thousands of angry comments and reposts intensify the target's visibility. |
| Media amplification | Partisan outlets repeat the messaging. | Audience segments mobilize around perceived "enemies." |
| Anonymous escalation | Unidentified actors post threats or personal info. | Targets experience fear, must increase security, or withdraw from public life. |
| Political framing | Critics are labeled as corrupt, disloyal, or dangerous. | Public perception shifts, and professional consequences follow. |

No single individual controls this system — but high-profile commentary often triggers predictable responses across digital environments.

The Psychology Behind the Pressure: Why Outrage Travels Fast

Researchers studying online harassment point to several factors that intensify pressure on political critics:

1. Identity-driven politics

Supporters may interpret criticism of a leader as a personal attack on themselves, escalating emotional reactions.

2. Digital mob behavior

People act more aggressively when anonymous and part of a large group.

3. Algorithmic rewards

Anger and sensational content spread faster because platforms prioritize engagement.

4. Polarization-driven framing

Opposition is cast as betrayal, not disagreement.

These dynamics help explain why even small public comments can unleash massive harassment waves.

Real-World Impact: Silencing, Fear, and Withdrawal

Threats against Trump critics — and political critics of any high-profile figure — have tangible consequences:

• Professionals leaving public service

Election workers, school board members, and local officials have resigned in large numbers, citing harassment.

• Reduced willingness to testify or speak publicly

Fear of retaliation discourages transparency.

• Damage to democratic participation

People avoid civic engagement if participation invites threats.

• Polarization that becomes self-reinforcing

When moderate voices withdraw, more extreme voices dominate the conversation.

This is not an issue unique to Trump — but his highly mobilized supporter base, amplified by partisan media and algorithmic incentives, has made the phenomenon especially intense in his orbit.

Media Ecosystems That Amplify Pressure

A crucial part of this story involves the media environments that shape public behavior.

1. Social Media Platforms

Platforms like X (Twitter), Facebook, Truth Social, TikTok, and YouTube:

  • Amplify emotionally charged content
  • Allow rapid mobilization
  • Host anonymous communities where threats proliferate
  • Spread viral memes and misinformation

2. Hyper-partisan Media

Some outlets frame dissent as betrayal or corruption, which can intensify anger among supporters.

3. Influencers and Online Personalities

Large accounts can rapidly bring attention — and pressure — to specific individuals through commentary or mockery.

Together, these networks create a landscape where a simple post can lead to real-world danger for individuals named in political disputes.

Can It Be Proven That These Actions Are Coordinated?

Legally and journalistically, it is important to avoid claiming explicit “coordination” without evidence. What exists, according to researchers, is a “convergence”:

  • Rhetoric signals a target
  • Media amplifies the signal
  • Online communities react
  • Anonymous threats escalate

This system behaves like a coordinated pressure network, but functions through decentralized social dynamics, not centralized planning. This distinction matters for accuracy. The intimidation is real — the mechanism is cultural, technological, and political, not conspiratorial.

The Courage of Those Who Speak Out

Despite the risks, many individuals continue to speak publicly. These include:

  • Local election workers
  • Former administration advisors
  • Military veterans
  • Journalists
  • Judges and legal professionals
  • Civic volunteers
  • Everyday citizens

Their ongoing willingness to speak up provides an essential counterbalance to fear-driven silence. One election supervisor said in an interview: “I stayed because democracy only works if regular people refuse to be intimidated.” Their resilience matters — for society, governance, and public trust.

How Citizens Can Respond: Building a Culture That Rejects Intimidation

1. Support Threatened Public Servants

Share verified information; avoid spreading personal details; promote respectful discourse.

2. Demand More Responsible Political Rhetoric

Hold leaders accountable for language that could endanger private citizens.

3. Advocate for Stronger Safety and Oversight Measures

Public institutions need updated threat assessment and protection mechanisms.

4. Strengthen Media Literacy

Help communities identify manipulated outrage and misinformation.

5. Encourage Civic Participation

Democracy depends on ordinary people refusing to be bullied out of public life.

Conclusion: Breaking the Cycle of Intimidation

The threats against Trump critics—and political critics in general—reveal a fundamental tension in American democracy:

Can a society remain free when disagreement carries personal danger?

This is not a partisan question. It is about ensuring that every citizen — regardless of party — has the right to speak, serve, testify, vote, and participate without fear. The pressure machine thrives on silence.
It grows powerful when people retreat.

But it weakens when citizens refuse to be intimidated, when institutions protect those who serve them, and when communities recognize that dissent is not disloyalty — it is democracy’s heartbeat.

Call to Action

If you believe in protecting dissent, supporting public servants, and defending democratic norms:
Share this article, start the conversation, and help build a safer civic space.

Your voice matters. Silence helps intimidation thrive. Speaking up helps democracy survive.

The Forces Behind the Onslaught of AI-Driven Disinformation Campaigns: Who Really Benefits?

Introduction: The Ghost in the Machine

Imagine waking up to a world where any voice you encounter on air or online — television, social media, news websites — can be manufactured with perfect realism. Not just a deepfake video or a synthetic voice, but whole news sites, bot armies, and even digital operatives generated and controlled by artificial intelligence.

This is not science fiction. Welcome to the new reality of AI-Driven Disinformation Campaigns.

AI is no longer just a technological marvel; it’s becoming a geopolitical weapon. Nations, private operators, and cyber-mercenary firms are leveraging generative AI to produce convincing propaganda, influence elections, and destabilize democracies — all at a scale and speed previously unimaginable.

This investigative article dives into the forces fueling this new wave of disinformation, looks at who profits from it, and explores what this means for global power dynamics. If you believe that disinformation was bad before — think again.

What Makes AI-Driven Disinformation Different—and More Dangerous

To understand the threat, we need to first clarify what sets AI-generated disinformation apart from older propaganda:

  1. Scale & Speed
    Generative AI can produce thousands of articles, tweets, images, and even audio clips in minutes. According to a Frontiers research paper, the number of AI-written fake-news sites grew more than tenfold in just a year. (Frontiers)
  2. Believability
    Deepfake capabilities now include not just video, but lifelike voice cloning. A European Parliament report notes a 118% increase in deepfake use in 2024 alone, especially in voice-based AI scams. (European Parliament)
  3. Automation of Influence Operations
    Disinformation actors are automating entire influence campaigns. Rather than a handful of human propagandists, AI helps deploy bot networks, write narratives, and tailor messages in real time. As PISM’s analysis shows, actors are already using generative models to coordinate bot networks and mass-distribute content. (Pism)
  4. Lower Risk, Higher Access
    AI lowers the bar for influence operations. State and non-state actors alike can rent “Disinformation-as-a-Service” (DaaS) models, making it cheap and efficient to launch campaigns.

Who’s Behind the Campaigns — The Key Players

Understanding who benefits from these campaigns is critical. Below are the main actors driving AI-powered disinformation — and their motivations.

Authoritarian States & Strategic Rivals

  • Russia: Long a pioneer in influence operations, Russia is now using AI to scale its propaganda. In Ukraine and Western Europe, Russian-linked operations such as the “Doppelgänger” campaign mimic real media outlets using cloned websites to spread pro-Kremlin narratives. (Wikipedia)
  • China: Through campaigns like “Spamouflage,” China’s state-linked networks use AI-generated social media accounts to promote narratives favorable to Beijing and harass dissidents abroad. (Wikipedia)
  • Multipolar Cooperation: According to Global Influence Ops reporting, China and Russia are increasingly cooperating in AI disinformation operations that target Western democracies — sharing tools, tech, and narratives. (GIOR)

These states benefit strategically: AI enables scaled, deniable information warfare that can sway public opinion, weaken rival democracies, and shift geopolitical power.

Private Actors & Cyber-Mercenaries

  • Team Jorge: This Israeli cyber-espionage firm has been exposed as running disinformation campaigns alongside hacking and influence operations, including dozens of election manipulation efforts. (Wikipedia)
  • Storm Propaganda Networks: Reporting and research have identified Russian-linked “Storm” groups (like Storm-1516) using AI-generated articles and websites to flood the web with propaganda. (Wikipedia)
  • Pravda Network: A pro-Russian network publishing millions of pro-Kremlin articles yearly, designed to influence training datasets for large language models (LLMs) and steer AI-generated text. (Wikipedia)

These actors make money through contracts, influence campaigns, and bespoke “bot farms” for hire — turning disinformation into a business.

Emerging Threat Vectors and Campaign Styles

AI-driven disinformation isn’t one-size-fits-all. Here are the ways it’s being used today:

Electoral Manipulation

  • Africa: According to German broadcaster DW, AI disinformation is already being used to target election processes in several African nations, undermining trust in electoral authorities. (Deutsche Welle)
  • South America: A report by ResearchAndMarkets predicts a 350–550% increase in AI-driven disinformation by 2026, particularly aimed at social movements, economic policies, and election integrity. (GlobeNewswire)
  • State-Sponsored Influence: Russian and Iranian agencies have allegedly used AI to produce election-related disinformation, prompting U.S. sanctions on groups involved in such operations. (The Verge)

Deepfake Propaganda and Voice Attacks

  • Olympics Deepfake: Microsoft uncovered a campaign featuring a deepfake Tom Cruise video, allegedly produced by a Russia-linked group, to undermine the Paris 2024 Olympics. (The Guardian)
  • Voice Cloning and “Vishing”: Audio deepfakes are now used to impersonate individuals in voice phishing attacks, something the EU Parliament warns is on the rise. (European Parliament)

Training Data Poisoning

Bad actors are intentionally injecting false or extreme content into the datasets used to train LLMs. These data poisoning attacks (along with related “prompt-injection” attacks, which manipulate a deployed model’s inputs rather than its training data) aim to subtly skew model outputs, making them more sympathetic to contentious or extreme narratives. (Pism)
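One basic defensive step against this kind of poisoning, sketched below, is filtering near-duplicate documents out of a training corpus before use, since operations like the Pravda network rely on publishing essentially the same narrative at massive volume. This is a minimal illustration with invented example data; the shingle size and similarity threshold are assumptions, not parameters from any real training pipeline.

```python
def shingles(text, k=5):
    """Split text into overlapping k-word shingles for similarity comparison."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets (0 = disjoint, 1 = identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def filter_near_duplicates(docs, threshold=0.8):
    """Keep each document only if it is not a near-duplicate of one already kept."""
    kept, kept_shingles = [], []
    for doc in docs:
        s = shingles(doc)
        if all(jaccard(s, seen) < threshold for seen in kept_shingles):
            kept.append(doc)
            kept_shingles.append(s)
    return kept
```

Production-scale deduplication uses approximate techniques such as MinHash rather than pairwise comparison, but the underlying signal is the same: mass-repeated text is a red flag.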

Bot Networks & AI Troll Farms

AI enables the creation of highly scalable, semi-autonomous bot networks. These accounts can generate mass content, interact with real users, and amplify narratives in highly coordinated ways — essentially creating digital echo chambers and artificial viral campaigns.
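The coordination signal researchers look for in such networks (many nominally independent accounts pushing identical text) can be sketched in its crudest form as follows. The post data, the normalization step, and the three-account threshold are all illustrative assumptions; real detection systems combine many more signals, such as posting times and account creation patterns.

```python
from collections import defaultdict

def find_coordinated_clusters(posts, min_accounts=3):
    """Group posts by normalized text and flag messages pushed by many accounts.

    `posts` is a list of (account_id, text) pairs; a message posted verbatim by
    at least `min_accounts` distinct accounts is treated as a coordination signal.
    """
    by_text = defaultdict(set)
    for account, text in posts:
        normalized = " ".join(text.lower().split())  # collapse case and whitespace
        by_text[normalized].add(account)
    return {text: sorted(accounts)
            for text, accounts in by_text.items()
            if len(accounts) >= min_accounts}
```

Even this toy version captures why AI-written variation matters to operators: lightly paraphrasing each post defeats exact-text clustering, which is what pushes defenders toward the similarity-based methods discussed above.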

Who Benefits — And What Are the Risks?

Strategic Advantages for Authoritarian Regimes

  • Plausible Deniability: AI campaign operations can be launched via synthetic accounts, making attribution difficult.
  • Scalable Influence: With AI content generation, propaganda becomes cheap and scalable.
  • Disruptive Power: Democracies become destabilized not by traditional military power but by information warfare that erodes trust.

Profits For Cyber-Mercenaries

Disinformation-as-a-Service (DaaS) firms are likely to be among the biggest winners. These outfits can deploy AI-powered influence operations for governments or commercial clients, charging for strategy, reach, and impact.

Technology Firms’ Double-Edged Role

AI companies are in a precarious position. Their tools are being used for manipulation — but they also build detection systems.

  • Cyabra, for example, provides AI-powered platforms to detect malicious deepfakes or bot-driven narratives. (Wikipedia)
  • Public and private pressure is growing for AI companies to label synthetic content, restrict certain uses, and build models that resist misuse.

Danger to Democracy and Civil Society

  • Erosion of Trust: When citizens can’t trust what they see and hear, institutional legitimacy collapses.
  • Polarization: AI disinformation exacerbates social divisions by hyper-targeting narratives to groups.
  • Manipulation of Marginalized Communities: In regions with weaker media literacy, AI propaganda can have disproportionate effects.

Global Responses and the Road to Resilience

How are governments, institutions, and societies responding — and what should be done?

Policy and Regulation

  • The EU is tightening rules on AI via the AI Act, alongside the Digital Services Act to require transparency and oversight. (Pism)
  • At a 2025 summit, global leaders emphasized the need for international cooperation to regulate AI espionage and disinformation. (DISA)

Tech Countermeasures

  • Develop “content provenance” systems: tools that can reliably detect whether content is AI-generated.
  • Deploy counter-LLMs: AI models that specialize in detecting malicious synthetic media.
  • Use threat intelligence frameworks like FakeCTI, which extract structured indicators from narrative campaigns, making attribution and response more efficient. (arXiv)
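The “content provenance” idea in the first bullet can be illustrated at its simplest: a publisher signs a hash of the content, and anyone can verify that signature before trusting the material. Real provenance systems such as C2PA use certificate chains and embedded manifests; the shared-key HMAC below is a deliberately simplified stand-in, with a made-up key and article text.

```python
import hashlib
import hmac

def sign_content(content: bytes, key: bytes) -> str:
    """Publisher side: produce an HMAC tag binding the content to the key."""
    return hmac.new(key, hashlib.sha256(content).digest(), hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str, key: bytes) -> bool:
    """Verifier side: recompute the tag and compare in constant time."""
    expected = sign_content(content, key)
    return hmac.compare_digest(expected, tag)
```

Any edit to the signed bytes, however small, produces a different hash and fails verification, which is exactly the property provenance systems rely on to flag doctored media.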

Civil Society Action

  • Increase media literacy: Citizens must understand not just what they consume, but who created it.
  • Fund independent fact-checking: Especially in vulnerable regions, real-time verification can beat synthetic content.
  • Support cross-border alliances: Democracy-defense coalitions must monitor and respond to AI influence ops globally.

Conclusion: A New Age of Influence Warfare

We are witnessing the dawn of a new kind of geopolitical contest — not fought in battlegrounds or missile silos, but online, in the heart of information networks.

AI-Driven Disinformation Campaigns represent a paradigm shift:

  • Actors can produce content at scale with unprecedented realism.
  • Influence operations can be automated and highly targeted.
  • Democratic institutions face a stealthy, potent threat from synthetic narratives.

State actors, cyber firms, and opportunistic mercenaries all have a stake — but it is often ordinary citizens and the integrity of democracy itself that pay the highest price.

AI is a tool — and like all tools, its impact depends on who wields it, and how.

Call to Action

  • Share this post with your network: help raise awareness about these hidden AI risks.
  • Stay informed: follow institutions working on AI policy, fact-checking, and digital resilience.
  • Support regulation: advocate for meaningful, global standards on AI to prevent its abuse in disinformation.
  • Educate others: host or join community events, online webinars, and local discussions about media literacy and AI.

The fight for truth in the age of AI is just beginning — and everyone has a part to play.

References

  1. Cyber.gc.ca report on generative AI polluting information ecosystems (Canadian Centre for Cyber Security)
  2. PISM analysis of disinformation actors using AI (Pism)
  3. World Economic Forum commentary on deepfakes (World Economic Forum)
  4. KAS study on AI-generated disinformation in Europe & Africa (Konrad Adenauer Stiftung)
  5. NATO-cyber summit coverage on AI disinformation (DISA)
  6. AI Disinformation & Security Report 2025 (USA projections) (GlobeNewswire)
  7. Global Disinformation Threats in South America report (GlobeNewswire)
  8. Ukraine-focused hybrid-warfare analysis on AI’s role in Kremlin disinformation (Friedrich Ebert Stiftung Library)
  9. Academic research on automated influence ops using LLMs (arXiv)
  10. Cyber threat intelligence using LLMs (FakeCTI) (arXiv)
QAnon and Global Conspiracy Movements

1. Introduction

In the vast, chaotic information landscape of the 21st century, QAnon stands out as one of the most dangerous and bizarre conspiracy theories to ever take root in modern political discourse. What began as a cryptic internet puzzle on an obscure imageboard evolved into a sprawling, almost cult-like ideology that has inspired real-world violence, undermined democratic institutions, and spread across national borders.

QAnon is not just an “American problem.” It is a globalized belief system, mutating to fit the political and cultural anxieties of different societies. The question is not simply what QAnon is, but why it resonates so deeply with millions of people.

2. The Origins of QAnon

QAnon emerged in October 2017 on the anonymous message board 4chan. A user calling themselves “Q” — supposedly a high-level government insider with “Q-level” security clearance — began posting cryptic messages known as “Q drops.” These vague clues claimed to reveal a secret war between President Donald Trump and a global cabal of elite pedophiles, corrupt politicians, and shadowy power brokers.

From the start, QAnon was designed for viral engagement. The Q drops were intentionally ambiguous, encouraging followers to “research” and “connect the dots” themselves. This turned passive consumers into active participants, a classic cult-recruitment tactic dressed up as citizen investigation.

3. The Historical Roots of Conspiracy Thinking

While QAnon feels like a distinctly internet-age phenomenon, its roots are much older.

  • Medieval Blood Libels: The false claim that Jewish communities kidnapped Christian children for ritual purposes echoes eerily in QAnon’s obsession with child-trafficking rings.
  • The Protocols of the Elders of Zion: This early 20th-century antisemitic forgery laid the groundwork for the “global elite conspiracy” trope.
  • The John Birch Society: In the Cold War era, the Birchers pushed narratives of communist infiltration and globalist control that prefigure QAnon rhetoric.

In short, QAnon is a modern remix of ancient prejudices, Cold War paranoia, and millennial internet culture.

4. Ultimate Causes and Reasons Behind QAnon

The explosive growth of QAnon can be traced to a convergence of psychological, cultural, and technological forces:

  • Distrust in Institutions: Years of political scandals, corporate corruption, and government secrecy eroded public faith in mainstream institutions.
  • The Algorithm Effect: Social media platforms reward emotional, sensational content. QAnon’s outrageous claims were perfectly suited for algorithmic amplification.
  • Cultural Fragmentation: As society becomes more polarized, people retreat into ideological echo chambers where conspiracies flourish unchecked.
  • Search for Meaning: In uncertain times, grand narratives offer comfort, purpose, and a sense of control.
  • Authoritarian Populism: QAnon dovetails neatly with populist political movements that cast themselves as defenders of “the people” against “corrupt elites.”

5. Evolution of the QAnon Movement

Initially dismissed as fringe nonsense, QAnon rapidly gained traction during the Trump presidency. Facebook groups swelled to hundreds of thousands of members. Q slogans appeared at political rallies.

In 2020, the COVID-19 pandemic supercharged the movement. With millions stuck at home, fearful and isolated, QAnon’s simplistic “good vs. evil” story provided an intoxicating sense of clarity. Soon, QAnon merged with anti-lockdown protests, anti-vaccine activism, and other fringe causes.

The January 6th Capitol riot revealed QAnon’s real-world danger. Many participants were open believers, convinced they were part of a patriotic revolution to stop a stolen election.

6. Present-Day Manifestations in the United States

Even after Q’s original posts stopped in late 2020, QAnon ideology persisted. Today, it shows up in:

  • School board meetings, where QAnon-adjacent claims fuel panic over “grooming” and “critical race theory.”
  • Local elections, where Q-affiliated candidates run for office.
  • Alternative media ecosystems, from podcasts to YouTube channels, that keep the movement alive without the Q drops.

QAnon has moved from fringe message boards into mainstream conservative politics, reshaping the Republican base and influencing legislation.

7. QAnon’s Global Offshoots

QAnon is no longer just an American export — it has gone international:

  • Germany: Merged with the Reichsbürger movement, which rejects the legitimacy of the modern German state.
  • France: Fused with anti-vaccine activism and anti-Macron sentiment.
  • Japan: A “JAnon” variant incorporates anti-China nationalism and pandemic disinformation.
  • Brazil: Tied to pro-Bolsonaro circles and anti-globalist rhetoric.
  • Australia & New Zealand: Linked with anti-lockdown protests and “sovereign citizen” ideologies.

Each offshoot adapts QAnon’s core mythos to local grievances, proving the malleable and viral nature of the movement.

8. Teachings, Doctrines, and Core Beliefs

While QAnon lacks a formal creed, several recurring doctrines define it:

  • A secret global cabal controls governments, media, and finance.
  • The cabal engages in child trafficking, satanic rituals, and corruption.
  • Donald Trump (or a local political equivalent) is a divinely inspired hero fighting the cabal.
  • A coming “Great Awakening” will expose the cabal, leading to mass arrests and a utopian society.
  • Followers have a sacred duty to “research” and “spread the truth.”

This framework transforms QAnon from a conspiracy theory into a quasi-religion, complete with prophecy, saviors, and apocalyptic visions.

9. Consequences of the QAnon Phenomenon

The harm QAnon causes is both personal and societal:

  • Radicalization and Violence: QAnon believers have been linked to kidnappings, armed standoffs, and terror plots.
  • Family Fragmentation: Loved ones cut ties with members who become consumed by QAnon.
  • Erosion of Democracy: By promoting distrust in elections and governance, QAnon undermines democratic legitimacy.
  • Public Health Risks: Anti-vaccine narratives fueled by QAnon have worsened pandemic outcomes.
  • Global Destabilization: The spread of QAnon to other countries injects instability into fragile political systems.

10. Fighting QAnon and Its Ideological Spread

Countering QAnon requires a multi-pronged strategy:

  • Digital Literacy Education: Teach people how to critically evaluate information sources.
  • Deplatforming Extremism: Social media companies must take consistent action against harmful content.
  • Community Outreach: Support programs to help people exit conspiracy movements.
  • Transparent Governance: Reduce the appeal of conspiracy theories by increasing institutional transparency.
  • Global Cooperation: QAnon is transnational, so responses must be too.

11. Call to Action

QAnon thrives in darkness — in the shadows of ignorance, fear, and division. Every time we scroll past disinformation without challenging it, every time we allow lies to go uncorrected, we help the movement grow.

This is not about silencing political opponents; it is about defending truth itself. If we care about democracy, social stability, and the safety of our communities, we must confront QAnon and its global variants with courage, clarity, and compassion.

Silence is complicity. Engagement is resistance. The time to act is now.
