The American People Fact-Checked Their Government
The response to the killing of Alex Pretti shows the internet at its best.
On October 17, 1961, tens of thousands of Algerians marched through the streets of Paris in peaceful defiance of a discriminatory curfew imposed by the French state. Police opened fire, beat protesters, arrested them en masse—and, in some cases, threw people into the Seine, where they drowned. Historians later called it “the bloodiest act of state repression of street protest in Western Europe in modern history.” At least 48—but possibly hundreds—were killed.
Yet for decades, the official story minimized the violence. The death toll, it was claimed, was three. Police had acted to defend themselves. The protesters were terrorists.
The French state actively buried the truth. Records were falsified. Evidence suppressed. Investigations blocked. Publications seized. The paper trail was shaped to match the story.
In 1999 the French Public Prosecutor’s Office concluded that a massacre had taken place, but only in 2012 did President Hollande acknowledge it on behalf of the French Republic. This is the danger of a public sphere without a distributed capacity to challenge official accounts in real time: It is difficult to imagine that the events of October 17 could have been hidden for so long if thousands of protesters and bystanders had carried smartphones, livestreamed the crackdown, and uploaded footage as the bodies hit the water.
Paris 1961 is a historical warning. Minneapolis 2026 is its modern counterpoint.
Within hours of the killing of Alex Pretti by federal immigration agents on January 24, top officials attempted to shape the narrative. They placed the blame squarely on the victim, with Secretary of Homeland Security Kristi Noem claiming that Pretti “approached” ICE officers with a gun and was killed after he “violently resisted” attempts to disarm him. White House Senior Advisor Stephen Miller called Pretti “an assassin” who “tried to murder federal agents.” FBI Director Kash Patel said, “You don’t have a right to break the law and incite violence.”
In other words, Pretti supposedly posed a threat and paid the price.
But something happened that couldn’t have happened in France in 1961. As bystander footage spread across social media, the official narrative began to collapse. Videos appeared to show a cellphone in one of Pretti’s hands and no gun in the other. Officers also appeared to remove his holstered gun—legally carried—before he was shot several times. It then emerged that Pretti was an ICU nurse with no criminal record—hardly the prototype of a terrorist.
The official account was clearly at odds with the best available evidence. Four days after the shooting, the Trump administration is already scrambling to save face, cast blame, and “de-escalate” the ICE presence in Minnesota.
The current obsession with misinformation tends to focus on the public: online mobs, foreign influencers, flaming trolls. But history suggests a more inconvenient truth: in times of crisis, disinformation often comes from above. Governments, including democratic ones, have powerful incentives to shape information. When a state agent shoots a citizen, the response is rarely “Let’s expose ourselves to transparency.” It is often the opposite: to control the narrative, limit scrutiny, discourage dissent, and frame the event in morally legitimizing terms.
What should our response look like? The Pretti case offered an answer—not only through the videos, but through something else that happened almost simultaneously: the public correction of powerful figures, at scale. Within hours the statements by Miller, Noem, and Patel—and even the official @DHSgov account—had all received Community Notes on X, a platform that, ironically, has become increasingly central to the populist right and is owned by Trump ally Elon Musk.
This is where social media performs a civic function.
When platforms label content as “false” in a top-down fashion, many users interpret it as bias—“truth policing” by corporate gatekeepers in cahoots with governments. But the Community Notes system is different. It is crowdsourced, asking volunteers to add context and sources to misleading posts. An open-source algorithm decides which notes become visible, and, crucially, prioritizes notes that gain support from users with different political perspectives. The point is not unanimity—it’s cross-ideological agreement sufficient to clear a threshold of credibility.
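The core idea of that threshold can be sketched in a few lines. This is a toy illustration of bridging-based ranking, not the actual open-source Community Notes algorithm (which uses matrix factorization over the full rating history); the perspective clusters and the 0.7 threshold here are illustrative assumptions.

```python
# Toy sketch of bridging-based ranking: a note becomes visible only when
# raters from *different* perspective clusters independently find it
# helpful. Simplified stand-in for the real open-source algorithm.

from statistics import mean

def note_is_visible(ratings, threshold=0.7):
    """ratings: list of (perspective_cluster, helpful: bool) pairs."""
    by_cluster = {}
    for cluster, helpful in ratings:
        by_cluster.setdefault(cluster, []).append(1.0 if helpful else 0.0)
    # Require ratings from at least two distinct clusters...
    if len(by_cluster) < 2:
        return False
    # ...and require every cluster's average helpfulness to clear the bar.
    return all(mean(votes) >= threshold for votes in by_cluster.values())

# A note rated helpful mostly by one side stays hidden:
partisan = [("left", True)] * 9 + [("right", False)] * 5
# A note rated helpful across clusters is surfaced:
bridging = [("left", True)] * 8 + [("right", True)] * 6 + [("right", False)]
```

The design choice this mimics is the one the article describes: raw vote counts don't matter; a note that only one ideological camp endorses never clears the bar, no matter how many votes it gets.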
This is what makes bottom-up correction hard to dismiss as partisan censorship. It involves a distributed group of users reaching a form of consensus, often by pointing to credible reporting. It can create a positive feedback loop: journalism supplies verifiable facts; the crowd amplifies and contextualizes them; the overall information environment becomes more resilient.
Early research into the impact of crowdsourcing is promising. Studies have found high accuracy rates for Community Notes in specific domains like COVID-19 content, and a significant share of notes cite high-quality sources.
More broadly, crowdsourced fact checking reflects an important principle: when trust in elite institutions collapses, a purely expert-driven model may fail or even backfire. Politically diverse crowds can sometimes do what “authoritative” gatekeepers cannot: persuade skeptics that a correction is legitimate.
Crowdsourcing is not a silver bullet. The search for a single, decisive fix for disinformation is a “modern mirage” that often serves as a pretext for giving authorities new powers they will inevitably abuse. But the promise of crowdsourcing suggests we should bet on pluralism: multiple, overlapping checks that strengthen the public’s ability to verify claims without empowering any single institution—especially the state—to control the boundaries of permissible speech. The mainstreaming of crowdsourced fact-checking across social media platforms should function as a disincentive to brazen lying by politicians and political influencers.
In Paris in 1961, the state could suppress evidence, control archives, intimidate media, and deflect until public attention faded. In Minneapolis in 2026, video evidence traveled faster than the official storyline—and distributed networks of verification made it harder for powerful figures to rewrite reality without pushback.
This is what a free society should aim for: not a perfect public sphere without falsehoods (which has never existed), but a public sphere with enough openness, transparency, and decentralized checking power to ensure that lies—especially from the top—cannot become the permanent record.
Jacob Mchangama is the Executive Director of The Future of Free Speech and a research professor at Vanderbilt University. He is also a Senior Fellow at FIRE and the author with Jeff Kosseff of The Future of Free Speech: Reversing the Global Decline of Democracy’s Most Essential Freedom (forthcoming 2026).
Follow Persuasion on X, Instagram, LinkedIn, and YouTube to keep up with our latest articles, podcasts, and events, as well as updates from excellent writers across our network.