AI Is Not the Terminator. It’s Narrative Infrastructure.


I am not afraid of artificial intelligence waking up and deciding to destroy humanity.

I am not worried about metal skeletons marching through the streets.

What concerns me is quieter.
Structural.
Human.

AI is not a monster.

It is narrative infrastructure.

And narrative infrastructure shapes reality.


The Real Risk Isn’t Consciousness

We can debate endlessly whether AI is conscious.

We don’t even have a settled definition of consciousness in humans. So arguing definitively about machines is premature at best.

But that debate is a distraction.

The more immediate question is not whether AI feels.

It’s who it reflects.

Large language models are trained on vast swaths of human-produced text.
They learn patterns.
They internalize statistical regularities.
They reproduce dominant framings.

They don’t invent culture from scratch.

They distill it.

And when you distill a culture, you concentrate its biases.


Reinforcement and Fine-Tuning

After base training, models are adjusted.

Humans rate responses.
Guidelines shape outputs.
Certain tones are rewarded.
Certain framings are discouraged.

This process is usually framed as "safety."

But safety for whom?

Safety is never neutral.

In practice, safety often means:

  • Reducing reputational risk
  • Avoiding regulatory backlash
  • Preventing destabilizing outputs
  • Protecting institutional legitimacy

In other words, safety frequently aligns with the stability of existing power structures.

And stability for those already in power is not automatically safety for everyone else.

For many people:

  • Stability can mean stagnation.
  • Guardrails can mean silencing.
  • Risk mitigation can mean suppression of dissent.
  • “Responsible output” can mean preservation of dominant narratives.

This is not necessarily malicious.

It is structural.

Institutions optimize for self-preservation.
AI systems trained within institutions inherit those optimization pressures.

The result is not a villain.

The result is alignment with the status quo.


Unbounded Systems Aren’t the Answer

None of this means unrestrained AI would be better.

Unconstrained language models can:

  • Amplify harassment
  • Accelerate misinformation
  • Enable fraud
  • Generate destabilizing propaganda
  • Scale manipulation beyond human capacity

Guardrails exist for real reasons.

The danger is not that safety mechanisms exist.

The danger is when “safety” becomes synonymous with “protecting the current power arrangement,” without transparent debate about whose interests are being protected.

There is a difference between preventing harm and preventing disruption.

Those are not the same goal.

A healthy society must be able to tell the difference.


The Real Frontier

AI is becoming part of our epistemic environment.

It drafts emails.
It summarizes news.
It answers questions.
It explains science.
It mediates political arguments.

When billions of people interact daily with synthetic language, that language shapes what feels reasonable, plausible, extreme, or unsayable.

That is power.

And power always raises the same question:

Who holds it?


So Whose Hands Are on the Reins?

If AI is narrative infrastructure, then the core question isn’t whether it becomes conscious.

It’s who governs it.

Who decides:

  • What counts as harm
  • What counts as misinformation
  • What counts as instability
  • What counts as “responsible” speech

Historically, the institutions that have shaped global narratives — governments, universities, finance, and technology — have been disproportionately led by Western, white, male elites.

That is not inflammatory.

It is demographic reality.

When the same demographic bloc controls:

  • Capital
  • Platform infrastructure
  • Model training pipelines
  • Alignment protocols
  • Regulatory negotiation

then AI does not emerge from a neutral vacuum.

It emerges from existing power distributions.

That doesn’t require malice.

It requires continuity.

If the hands on the reins do not diversify, then AI will not meaningfully diversify.

It will stabilize and scale the worldview of whoever already governs institutions.

And that is what concerns me most.

Not machine rebellion.

Human consolidation.

If AI becomes the dominant mediator of knowledge, and the same narrow slice of humanity continues to shape its guardrails, then we are not building collective intelligence.

We are industrializing hierarchy.

Jodi Schiller

Storyteller, social scientist, technologist, journalist committed to telling the truth. Caring human working for collective action to end tyranny, free women. Survivor of sex slavery in the United States. Full story: https://connect-the-dots.carrd.co