AI Is Not the Terminator. It's Narrative Infrastructure.

Beyond the Distraction of Machine Consciousness

I am not afraid of artificial intelligence waking up and deciding to destroy humanity. I am not worried about metal skeletons marching through the streets. What concerns me is quieter. Structural. Human.

AI is not a monster. It is narrative infrastructure. And narrative infrastructure shapes reality.


The Real Risk Isn’t Consciousness

We can debate endlessly whether AI is conscious. We don’t even have a settled definition of consciousness in humans, so arguing definitively about machines is premature at best.

But that debate is a distraction. The immediate question is not whether AI feels—it’s who it reflects.

  • Distillation, Not Invention: LLMs are trained on human-produced text. They learn patterns, internalize statistical regularities, and reproduce dominant framings.
  • Concentrated Bias: They don’t invent culture from scratch; they distill it. And when you distill a culture, you concentrate its biases.

The Illusion of "Safety"

After base training, models are adjusted through reinforcement and fine-tuning. Humans rate responses; guidelines shape outputs. This process is often framed as "safety."

But safety is never neutral. In practice, "safety" frequently aligns with:

  • Reducing reputational risk.
  • Avoiding regulatory backlash.
  • Preventing destabilizing outputs.
  • Protecting institutional legitimacy.

The Structural Reality: Stability for those already in power is not automatically safety for everyone else. For many, stability can mean stagnation; guardrails can mean silencing; and "responsible output" can mean the preservation of dominant narratives.

The Difference Between Harm and Disruption

This does not mean unrestrained AI is the answer. Unconstrained models can amplify harassment, accelerate misinformation, and scale manipulation beyond human capacity. Guardrails exist for real reasons.

The danger is not that safety mechanisms exist. The danger is when "safety" becomes synonymous with "protecting the current power arrangement."

  • Preventing Harm: Protecting individuals from violence, fraud, and harassment.
  • Preventing Disruption: Protecting institutions from dissent and narrative shifts.

A healthy society must be able to tell the difference.

The Industrialization of Hierarchy

AI is becoming our epistemic environment. It drafts our emails, summarizes our news, and mediates our political arguments. When billions interact with synthetic language daily, that language defines what feels reasonable, plausible, or unsayable.

That is power. And power always raises the same question: Who holds it?

Historically, the institutions shaping global narratives—government, academia, finance, and tech—have been led by a narrow demographic. When that same bloc controls the capital, the training pipelines, and the alignment protocols, AI does not emerge from a vacuum. It emerges from existing power distributions.

The Core Concern

This doesn't require malice; it only requires continuity.

If the hands on the reins do not diversify, then AI will not meaningfully diversify. It will stabilize and scale the worldview of whoever already governs: usually men who are already lying successfully to themselves about what reality is.

The threat isn't machine rebellion. It is human consolidation. If we do not change who shapes these guardrails, we aren't building collective intelligence. We are industrializing hierarchy.

Jodi Schiller

Storyteller, social scientist, technologist, and journalist committed to telling the truth. Caring human working for collective action to end tyranny and free women. Survivor of sex slavery in the United States. Full story: https://connect-the-dots.carrd.co
San Rafael