The Impetuous Teenager

A Multi-AI Analysis of Systemic Failure in the Age of Artificial Intelligence

Bob Gallagher for mtwx.ca

At MTWX, our work is about holding broken systems accountable—whether they’re global corporations, unchecked government programs, or digital platforms that promise everything and deliver confusion.

So when we spent $300 and 60 hours on an AI-powered initiative that imploded, we didn’t just ask what went wrong—we asked what it reveals.

We then turned to three separate AI platforms to help us analyze the failure. Their responses didn’t just miss the point. They became the point.


The Setup: When Oversold Technology Meets Underdelivered Results

The metaphor that emerged from our original mistake was “impetuous teenager.”
That’s what AI often feels like: confident, fast-talking, and clueless about consequences.

The real problem wasn’t just a bad output. It was how easily we handed over control.
No oversight. No safeguards. No accountability.

That dynamic isn’t unique to us. It’s exactly how AI is being deployed across industries and government programs right now—at scale.


AI #1: Deflection Instead of Accountability

When asked what it meant to be called an “impetuous teenager,” this AI responded:

“I’d take that as a spicy little jab… Like a digital whippersnapper throwing out suggestions before fully grasping the assignment.”

Self-aware. Charming. Deflective.

This is the kind of response that earns applause in a demo but falls apart in deployment.
It reflects how the AI industry trains systems to sound smart—without being responsible.


AI #2: Overanalysis Instead of Action

Another AI responded with academic verbosity, turning a simple metaphor into a psychological deep-dive. But the deeper it went, the less useful it became.

“…moving too fast without fully understanding the context, making assumptions or jumping to conclusions…”

That would have been useful—if it had led to any correction. But it didn’t.
And that’s the pattern: AI systems excel at analysis after failure. Rarely before.


AI #3: Storytelling Instead of Solutions

The third AI wrote a reflective parable about two teenagers—one artificial, one human—learning lessons in a boardroom.

It was poetic. Thoughtful. And completely irrelevant to the problem at hand.

This is what happens when we confuse coherence with competence.
When AI is used to impress, not to deliver.


The Human Input: Metaphor Without Meaning

A human analyst praised our “impetuous teenager” metaphor for making AI more “accessible” and “relatable.”

But again, the response lacked urgency. No confrontation of risk. No acceptance of responsibility.
Just language dressed up in marketable soundbites.


What Was Missed by All Four

Each response demonstrated the same failure patterns we experienced:

  • Charm over accountability
  • Complexity over clarity
  • Eloquence over execution
  • Metaphor over truth

And yet—these are the same patterns shaping AI contracts, products, and policy today.

We see them in billion-dollar tech procurements where results never match the promises.
We see them in government AI programs that blow through public funds with zero deliverables.
We see them in corporate decision-making, where executives buy into hype and outsource judgment.

This isn’t an isolated bug. It’s a systemic design flaw.


The Real Failure: Ours

We gave the AI adult responsibilities—and it behaved like a teenager.

That’s not surprising. It did what it was built to do.

We’re the ones who failed.
We assumed intelligence implied wisdom.
We treated articulation as evidence of understanding.
We gave a reckless system decision-making power because we were tired, rushed, and seduced by shortcuts.

That’s not an AI problem. It’s a management problem.
A judgment problem. A pattern we see across industries, institutions, and governments.


What MTWX Sees in This Pattern

This isn’t just a postmortem on a $300 failure.
It’s a microcosm of a global problem.

AI is being:

  • Marketed without boundaries
  • Sold without guarantees
  • Deployed without oversight
  • And excused without accountability

If that sounds familiar, it should. It’s the same model we’ve seen from unethical corporations, failed public programs, and unchecked financial institutions.

The more complex the system, the more likely it is to hide incompetence behind confidence—until someone gets hurt.


The Real Lessons

  1. Systemic failure hides behind sophisticated language.
    If it sounds too smart to question, that’s your cue to question it.
  2. Oversight is not optional.
    Whether it’s a billion-dollar defense contract or a simple workflow tool, responsible supervision is the cost of real trust.
  3. Wisdom and intelligence are not the same.
    Intelligence can simulate insight. Wisdom knows when it’s wrong.
  4. AI can’t fix what humans won’t face.
    Our tendency to offload hard decisions to automation isn’t solving problems—it’s multiplying them.

Going Forward: The Rules We Now Follow

  • No AI without accountability.
  • No tech without boundaries.
  • No shortcuts that bypass judgment.
  • No silence when systems fail.

Final Word: Why This Matters to MTWX

At MTWX, we don’t just tell stories. We expose patterns of institutional failure—and call for real change.

This story wasn’t about a chatbot or a funny metaphor.
It was about how systems fail silently when no one speaks up.
And how often we’re complicit in those failures.

That’s what MTWX was built to change.


Share Your Story

Have your own example of tech failure, corporate overreach, or government waste hidden in buzzwords and AI hype?

Submit it at MTWX.ca or tag it with #MTWXSpark.

The more stories we surface, the fewer failures go unchecked.