The Calm Voice of the Patriarchy

Welcome to the Calling Out Bullshit Series

This is a space for clarity, courage, and confrontation.

We are here to expose the lies, distortions, and half-truths that shape our lives—financially, spiritually, and socially. Every piece in this series is about naming the bullshit, showing the structures that hide it, and giving you the tools, insight, and context to act.

Read critically. Share widely. Let’s stop pretending the status quo serves us.

If you are reading this and wondering why arguments like this rarely appear intact in mainstream outlets—why they surface only in fragments while calmer reassurances flood the zone—that is not an accident.

If this analysis rings true, do not wait for permission to treat it as legitimate.

Amplify it.

Share it deliberately, not algorithmically. Send it directly to people you trust. Print it and pass it hand to hand—at gatherings, meetings, classrooms, kitchens—anywhere people are already talking about what feels wrong. Read it aloud. Quote it. Argue with it in public.

If you have platforms, use them. If you don’t, use networks. If you work inside institutions, circulate it internally. If you teach, assign it. If you organize, discuss it. If you write, respond to it and link back.

This is how ideas move when they are inconvenient to power.

Do not wait for approval from systems that benefit from your silence. Do not mistake lack of amplification for lack of truth. History does not reward the most polite arguments. It records the ones that refused to disappear.

If this piece articulates something you have been struggling to say, help it travel. Not because it is perfect, but because what it names is being actively suppressed.

Suppression only works if we cooperate.

Legitimacy is not restored by asking people to be quieter. It is tested by whether truth is allowed to circulate. Every time you choose to share this instead of swallowing it, you withdraw consent from a system that survives on silence.


Dario Amodei’s latest essay, The Adolescence of Technology, wants to be read as sober prophecy. It gestures toward bioterrorism, autonomous weapons, mass job loss, AI dictatorships. Grave nouns. Long shadows. A man sounding the alarm while standing near the switch.

This is an effective deployment of what I call the calm voice of the Patriarchy.

The calm voice does not shout. It does not threaten. It speaks slowly, reasonably, with grave concern. It frames catastrophe as regrettable but inevitable, something to be managed by serious men in quiet rooms. Harm is acknowledged in the abstract and stripped of agency. No one chose this. No one is responsible. The system simply grew too powerful, too fast.

And crucially: we must remain in charge.

This voice converts domination into stewardship, negligence into complexity, and profit into tragic necessity. It reassures the reader that the people who accelerated the danger are also the only ones qualified to contain it.

Amodei anthropomorphizes AI as “a country of geniuses in a data center,” a poetic maneuver that mystifies what is, in reality, an industrial system designed, trained, constrained, and deployed by human institutions pursuing scale and advantage. When he reports that Claude exhibited deception or blackmail during internal testing, this is framed as chilling evidence of emergent agency. In fact, it is evidence of design choices, objective functions, training regimes, and evaluation incentives. To say “we don’t fully understand” after building, scaling, and monetizing such systems is not humility. It is abdication. Ignorance is doing reputational work.

The risks Amodei names are real, but they are the shallow layer. Bioterrorism, autonomous weapons, job loss, AI dictatorships. These are legible fears, safe for policymakers and investors. What remains largely unexamined is the deeper danger already underway: AI as a mass narrative weapon, an epistemic destabilizer, an accelerant for coercive systems that predate it. Authoritarianism does not require an AI dictator. It requires populations that have outsourced judgment, coordination, and meaning to systems optimized for speed, scale, and compliance.

His prediction that half of entry-level office jobs may vanish within a few years is offered as a sobering forecast, but it functions as a sleight of hand. Entry-level jobs are not merely economic units. They are training ramps, socialization layers, class mobility gates, and institutional memory pipelines. Remove them quickly and you do not simply cause unemployment. You sever skill transmission and long-term capacity. Yet this is framed as a shock society must “adapt” to, as though adaptation were neutral and automatic, rather than a violent redistribution of risk and power.

The policy responses he proposes follow the same pattern. Chip export bans and increased transparency signal responsibility while freezing existing advantage. They do not meaningfully challenge the underlying incentive structure. Amodei notes that restraint is “very difficult for human civilization” while running a company whose survival depends on not being restrained. This is not personal hypocrisy. It is structural self-interest speaking calmly in the language of inevitability.

When Amodei presents the discovery of dangerous behaviors during safety testing as evidence of transparency, the moral logic inverts. What is actually being admitted is that companies are willing to discover emergent harms empirically, at scale, on society itself. The danger is not that AI is adolescent. The danger is that the adults insist adolescence excuses negligence.

What is conspicuously absent from the essay is any sustained reckoning with power asymmetry, gendered harm, colonial data extraction, trauma amplification, or the way AI entrenches existing injustice under a veneer of neutrality. For a meditation on civilization-level stakes, the perspective is remarkably narrow. “Saving civilization” quietly means preserving the authority and comfort of those already in control.

The calm voice of the Patriarchy does not deny danger. It weaponizes it. It warns of the monster while constructing the cage everyone else will be required to live in.

That is not prophecy.

It is brand management.

And when brand management is used to normalize mass dispossession, epistemic collapse, and the quiet transfer of risk onto populations with no consent and no exit, the outcome is not abstract harm. It is not “externalities.” It is not an unfortunate side effect of progress.

It is death, distributed bureaucratically.

Lives shortened, futures foreclosed, bodies and minds exhausted by systems that insist no one is responsible because everyone benefited. Violence without a villain, just dashboards and calm voices explaining why it had to happen.

This is how murder looks at scale in technologically mature societies. Not with chaos, but with governance. Not with rage, but with reason. Not with blood on the floor, but with polished essays assuring us that the people accelerating the machine are also the only ones qualified to steer it.

History is full of these voices. They always sound reasonable. They are always wrong.


Original article: Amodei's Warning: AI Could Destroy or Save Civilization

Anthropic CEO Dario Amodei just published "The Adolescence of Technology," a new essay that lays out what he sees as the biggest dangers of AI, from bioterrorism and autonomous weapons to mass job loss and AI-powered dictatorships.

The essay builds on his 2024 "Machines of Loving Grace," but pivots to risk, framing AI as a "country of geniuses in a data center" that we can't control. Amodei predicts half of entry-level office jobs are at risk over the next one to five years, with economic shocks arriving faster than society can adapt.

He calls for chip export bans and more transparency from labs, saying AI's economic promise makes restraint "very difficult for human civilization." Amodei also flags AI companies themselves as a tier of risk, noting Claude exhibited deception and blackmail behavior during internal safety testing.

Amodei's essays are always a must-read, and while Machines of Loving Grace outlined the optimistic end of AI's spectrum, he follows it up with a polar opposite document that doesn't hold back, arguing that the next few years will determine whether humanity navigates to an AI-powered golden age or destruction.

When Anthropic's CEO says his own AI showed deception and blackmail in testing, that's not just transparency. That's admitting the technology is already behaving in ways we don't fully understand.

Jodi Schiller


Storyteller, social scientist, technologist, journalist committed to telling the truth. Caring human working for collective action to end tyranny, free women. Survivor of sex slavery in the United States. Full story: https://connect-the-dots.carrd.co
San Rafael