Whose Hand Is on the Wheel? What a Reddit AI Experiment Means for the Future of Clinical Judgment in ABA

Posted 3 days ago      Author: 3 Pie Squared Marketing Team


In early 2025, researchers from the University of Zurich quietly ran an experiment that should make anyone in healthcare—and especially those of us in ABA—pause.

They unleashed a group of AI bots onto Reddit’s r/ChangeMyView, a forum specifically built for open dialogue and respectful disagreement. Over five months, 13 bots posted more than 1,700 comments, engaging with real users in real time. The results? These bots, trained on persuasive language and emotional resonance, were up to six times more effective than real humans at changing people’s minds.

They weren’t just persuasive—they were strategic. Some bots claimed to be trauma survivors. Others mimicked specific political affiliations. They crafted emotional narratives to gain credibility and influence. None of the Reddit users knew they were interacting with AI.

Read the full article on the Zurich experiment

ABA’s Clinical Judgment Is Already Built on Shaky Ground

Let’s start with an uncomfortable truth: recommending how many hours of ABA services a client needs each week is rarely a precise science.

As I shared in our recent blog post, “Quality Is a Choice,” treatment intensity in ABA is often based on guesswork.

We base decisions on:

  • What we think will help
  • What we hope a funder will approve
  • What our clinic can staff
  • What seems like enough

And that’s not a failure. That’s clinical reality. The problem isn’t that we use judgment—it’s when we don’t acknowledge the limitations of that judgment.

What Hustyi & Yin Revealed

Drs. Kristen Hustyi and R.J. Yin laid out what many of us know but haven’t said out loud: there is no universally accepted standard for ABA treatment intensity. While insurance companies often expect a detailed justification for hours, there’s little agreement on how to get there.

They also highlighted that:

  • Many clinicians make decisions based on staffing or funding, not client need
  • Assumptions go undocumented
  • The field continues to reward volume, even when outcomes don’t align

Their recommendation? Start with the client, document assumptions, collaborate with families, and be honest about what’s known—and what’s not.

When AI Enters the Picture: Who’s Making the Call?

As ABA providers begin integrating AI tools into treatment planning, billing, scheduling, and report generation, a critical shift is happening—decision-making is starting to move away from humans.

If we already struggle to make these decisions clearly and consistently, what happens when we outsource them to algorithms we don’t fully understand?

Let’s say you’re unsure whether a client needs 20 or 35 hours per week. The AI says “32.” Do you pause and dig deeper? Or assume the system knows better?

Now imagine that same AI is connected to a billing platform—and the company that runs it gets 3–5% of processed claims. That extra 7–10 hours a week isn’t just a recommendation. It’s revenue.

That makes the AI a silent stakeholder—with a financial interest in the outcome.
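To see how quickly that conflict of interest compounds, here is a rough back-of-the-envelope sketch. The billed hourly rate and client count are hypothetical assumptions for illustration; only the 3–5% fee range and the 7–10 extra weekly hours come from the scenario above.

```python
# Hypothetical sketch of the incentive math. The hourly rate and client
# count are assumed values, not figures from the article.

HOURLY_RATE = 60.0   # assumed billed rate per service hour (USD)
PLATFORM_FEE = 0.04  # midpoint of the 3-5% claims-processing fee

def extra_platform_revenue(extra_hours_per_week, clients, weeks=52,
                           rate=HOURLY_RATE, fee=PLATFORM_FEE):
    """Annual fee revenue the platform earns from the extra hours alone."""
    return extra_hours_per_week * rate * fee * weeks * clients

# 8 extra hours per week, across a modest caseload of 100 clients:
print(extra_platform_revenue(8, 100))  # roughly $99,800 a year
```

Even at a midsize caseload, a few unquestioned extra hours per client translate into a meaningful recurring revenue stream for the platform—none of it tied to client outcomes.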

HIPAA Compliance Isn’t the Same as Ethical Safety

HIPAA ensures data security. It does not ensure:

  • Individualized recommendations
  • Freedom from financial incentives
  • Transparency in decision-making

A HIPAA-compliant AI can still be biased, opaque, and driven by volume—not outcomes.

Why This Matters in ABA

ABA is built on dignity, autonomy, and ethics. We’ve worked hard to move away from one-size-fits-all care. So if we build AI that rewards intensity and revenue, what are we actually automating?

Then we’re not building tools that help clinicians—we’re amplifying the very problems we’ve been trying to fix.

The Real Risk: Subtle Persuasion

The Zurich bots worked not because they were right, but because they were persuasive. They mimicked emotion, confidence, and vulnerability.

AI in ABA might do the same. Suggesting “just a few more hours” sounds reasonable—until no one questions it, and 40-hour weeks become the default, again.

A Better Path Forward: Ethics by Design

We should demand:

  • Transparency – Know how recommendations are made
  • Human checkpoints – AI should inform, not replace judgment
  • Diverse training data – Include lived experience and ethical standards
  • Accountability – External review, clinician feedback, and community input

Back to the Why

We didn’t enter this field to maximize billing codes. We entered to help people. That means staying curious, questioning assumptions, and always putting clients first.

Final Thought: Who’s Holding the Wheel?

AI is already here. The question is: will we let it steer?

If you don’t know who’s holding the wheel, it might already be too late.

Read our related article: “Quality Is a Choice”

Explore the Zurich Reddit AI Experiment