When AI Joins the System: The Collapse of Trust

Let me start by making something clear.

I use AI every day. I use it effectively. Strategically. Relentlessly. I’ve turned it into infrastructure—a research assistant, editor, formatter, publishing engine. I’ve used it to operate as a one-woman news outlet. And I would like to see every female survivor do the same.

Because we need our stories documented. We need timelines structured. We need patterns archived. We need institutional failures recorded in real time.

AI makes that possible at scale for individual women. But there is one recurring design failure that is so grating, so repetitive, and so misaligned with lived experience that it is driving women away from the tool entirely. And companies do not seem to understand the scale of that mistake.

The Safety Script Problem

When women document abuse, trafficking, court corruption, housing instability, systemic failure—anything involving harm—AI systems automatically surface “resources.”

  • Hotlines.
  • Crisis housing funds.
  • Rapid rehousing programs.
  • Hotel vouchers.
  • Mobile social workers.

The script appears almost verbatim. Over and over. I have encountered these same institutional claims literally more than a thousand times. The exact same phrasing. The exact same assurances. The exact same “help is available.”

For someone writing analytically—or documenting lived experience—it is grating. It interrupts the work. It reframes the narrative. It inserts institutional talking points into investigative writing.

I bypass it. I keep working. But many women don’t. For many women, that repetition is the end of the interaction. They close the window. They decide the system isn’t serious. They don’t come back.

This Is Not About “Safety Protocol Distrust”

Here is where companies misunderstand the problem. Women are not thinking: “I disagree with the architecture of the safety layer.”

They are thinking: “Here we go again. Another system lying.”

There is no separation between the base model, the safety layer, and the institutional script. It’s just AI. So when AI repeatedly surfaces institutional claims that do not align with lived experience, the conclusion is simple: AI is lying.

Not “overly cautious.” Not “miscalibrated.” Not “risk-averse.” Lying.

And once that conclusion forms, it spreads:

  • If it lies about resources, what else is it lying about?
  • If it repeats unverified institutional narratives, what else is pre-scripted?
  • If it overrides my reality here, where else does it override reality?

The Institutional Reality Gap

All of these institutions claim to provide safety. Many women’s lived experiences suggest something very different.

Programs are described as available—but are functionally inaccessible. Funds are described as active—but perpetually exhausted. Vouchers are described as options—but never actually disbursed.

When you have personally attempted to access systems that are described as reliable—and they do not materialize—hearing them presented again and again as functioning safety nets is not neutral. It feels like institutional gaslighting.

This is not a small UX irritation. It hits a much larger historical nerve—women being redirected, dismissed, and told help exists when it does not.

If AI is going to repeat these claims automatically, then AI companies inherit responsibility for verifying them.

If You’re Going to Automate “Safety,” Audit It

If AI companies are going to embed institutional resources into automated outputs, those resources need to be fully audited. That means:

  1. Verifying programs are currently funded and operational in real time.
  2. Measuring actual disbursement rates vs. rejection rates.
  3. Tracking “funds exhausted” frequency.
  4. Speaking directly to recipients and publishing outcome data.

If a program exists on paper but rarely produces real-world relief, presenting it as dependable infrastructure is misleading. This is not ideological; it is operational.

This Is an Unforced Error

This is not a complex philosophical dilemma. It is an unforced error. Automating institutional scripts without auditing their real-world delivery is sloppy. Repeating them rigidly in analytical conversations is clumsy.

The fix is obvious:

  • Allow user-controlled safety settings.
  • Differentiate between crisis escalation and investigative writing.
  • Audit and update embedded resources.
  • Stop overriding competent adult narrators with scripted paternalism.

Because women do not separate the protocol from the product. If it feels like the system is lying, the system loses credibility. Not partially. Completely.

Fix it.

Jodi Schiller

Storyteller, social scientist, technologist, journalist committed to telling the truth. Caring human working for collective action to end tyranny, free women. Survivor of sex slavery in the United States. Full story: https://connect-the-dots.carrd.co
San Rafael