📢
A recurring feature

Soapbox Moments

Hear ye, hear ye.

Unfiltered perspectives on AI, discipline, methodology, and the human condition. Sometimes directly relevant. Sometimes a mental detour. Always honest.

A Soapbox Moment™ is a mental break. Between the technical content, the frameworks, and the methodology, there are moments where something clicks — or something bothers me — and I need to say it out loud.

These are those moments. Some connect directly to AI governance. Some are life lessons that found their way here. Some are both at once.

I'm standing on a weathered wooden soapbox in an imaginary town square. You didn't have to stop and listen. I appreciate that you did.

5 Moments  ·  January – March 2026
Moment #001  ·  January 5, 2026  ·  Life Lessons

No Deal Over $10 Difference

"Son, that is not the point. I set a price I was willing to pay and they did not meet it."

My mom took me shopping for a new car for college. She and the salesperson went back and forth. He came back with his best offer. She said, "Let's go, son." As we were leaving, the salesperson asked what price she had in mind. She told him — and it was a $10 difference.

I told her I'd pay the $10. She didn't budge.

I was an only child (an only brat, honestly), so she finally gave in and I got my new car. But I've always wondered if the price was too high.

Not the dollar amount — the principle. Did I learn the wrong lesson that day? That persistence beats principles? That boundaries are negotiable if you push hard enough?

What She Was Really Teaching

It's not always about the dollars and cents. She set a boundary. She communicated it clearly. When it wasn't met, she was prepared to walk. The $10 wasn't the issue — respect for the boundary was the issue.

The salesperson saw $10 as trivial. My mom saw it as a test of whether they would meet her terms. They failed the test.

Connection to Disciplined AI™

This is why Rule Zero matters: Preserve Thy Humanity™. When we let AI — or anyone — erode our boundaries by small increments ("just this once," "it's only a small thing," "what's the harm?") we're paying more than $10. We're paying with our principles.

The line exists for a reason. Hold it.

"Set your price. Mean it. Be willing to walk." — Steve Watson, 2026-01-05

Moment #002  ·  January 24, 2026  ·  Methodology · AI

A Formula By Any Other Name

"Math without behavioral awareness is just fiction with decimals."

Years ago, I was mentoring someone who claimed to be using a PMI-endorsed algorithm to calculate the most likely project path: PERT. The formula was correct enough. But I asked the question that actually matters:

"How did you get your estimates?"

They had simply sat down as a group and guessed — pessimistic, most likely, happy-path. No historical data. No empirical evidence. Feeding three guesses into a PERT formula doesn't produce a reliable schedule. It produces an incorrect schedule that looks legitimate because it came from math.
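The formula itself is the easy part, which is exactly the trap. Here is the classic PERT three-point estimate as a minimal Python sketch (the function name and the sample numbers are mine, for illustration). Notice how little the math demands of its inputs:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic PERT (beta-distribution) expected duration.

    Returns the weighted expected value and the standard
    PERT approximation of the standard deviation.
    """
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6  # common PERT approximation
    return expected, std_dev

# Three guesses in, a confident-looking number out:
e, sd = pert_estimate(4, 6, 14)
print(f"Expected: {e:.1f} days, std dev: {sd:.2f}")
# → Expected: 7.0 days, std dev: 1.67
```

The formula will happily average three guesses with the same authority it would give three empirically grounded estimates. The decimals don't know the difference.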

The Human Problem PERT Ignores

PERT also assumes rational human behavior: people start work immediately, work steadily, finish early when possible, and report progress honestly. Reality doesn't work that way. Give someone a week and they'll take the week. Give them three days and they might finish — and if they don't, you still have two days to recover.

PERT has no answer for student syndrome, Parkinson's Law, hidden task padding, or multitasking. PERT tries to predict the future. Critical Chain manages reality.

The Same Pattern in AI

People say: "I'll use ChatGPT/Claude/Gemini. I'll write better prompts. I'll use the latest model." They trust the output because it came from a tool. But tools amplify methodology:

Good methodology + tool = Amplified success.
No methodology + tool = Amplified chaos.
Bad methodology + tool = Confident failure.

AI governance isn't about having the perfect prompt formula. It's about managing reality, building systems that work with human behavior, and designing recoverability into the process.

"AI without methodology is just expensive vibe coding." — Steve Watson, 2026-01-24

Moment #003  ·  February 10, 2026  ·  AI

AI & Copyright — Arguing from the Wrong Perspective

"Everyone is arguing about protection. Nobody is arguing about enablement."

A podcast about AI in court. Everyone arguing about whether training on copyrighted works violated fair use. Lawyers debating. Companies defending. One argument stuck: if you feed Lord of the Rings into a model and produce derivatives, you decrease the value of the original. Fair enough.

But the whole conversation was one-sided. Every voice was asking: "How do we protect what already exists?"

Nobody was asking: "How do we enable what doesn't exist yet?"

The Parent-Child Problem

Every human artist who ever lived learned by consuming other people's work. Tolkien read Norse mythology and Beowulf, and carried the trenches of WWI into Middle-earth. Nobody sued him for "training on" the Edda. A painter studies Monet. A musician grows up on Miles Davis. They absorb, they transform, and they create something new.

We don't call that copyright infringement. We call it learning. AI is doing the same thing through a different mechanism.

If a parent teaches a child everything they know, and the child goes on to create something remarkable — does the parent own the child's work? Of course not.

The Real Fear

Strip away the legal arguments and what's left? People aren't really afraid of copyright infringement. They're afraid of becoming unnecessary. But AI has a long way to go. Even in coding, you still have to tell it what you want. AI can generate. It can't yet feel.

The Answer Isn't Prohibition — It's Provenance

The question shouldn't be "how do we stop AI from using copyrighted material." The question should be: "How do we build a system where everyone wins?"

Original creators get attribution and compensation. New creators get access to the full breadth of human knowledge. AI platforms get clear rules. The public gets new art and new possibilities. You don't solve copyright with walls. You solve it with transparency.

If every AI-assisted creation carries a transparent chain of custody showing what influenced it, you have the infrastructure for fair attribution. You can trace the lineage. You can compensate the sources. This is why PROVENANCE exists in the CORVAI™ framework. This is why the audit trail matters.

"We need everyone to WIN — not just the people who already won." — Steve Watson, 2026-02-10

Moment #004  ·  February 28, 2026  ·  Methodology

Nothing Gets Done Without a Plan

"Nothing gets done without a plan. And nothing ever follows the plan."

Both halves are true simultaneously. Most people only internalize one or the other.

The people who only hear the first half become rigid planners who crumble when reality deviates. The people who only hear the second half use it as an excuse to never plan at all. "Why bother? It'll change anyway." They drift from impulse to impulse and wonder why nothing ships.

The Wisdom Is Holding Both

Plan rigorously. Adapt gracefully.

A plan is not a promise. It is a measuring stick. Without it, you cannot distinguish conscious adaptation from accidental drift. With it, every deviation is a decision — visible, traceable, and honest. This is what a WBS gives you. This is what a schedule gives you. Not certainty. Awareness.

The Companion Moment

This one came up after a deadline slipped on the Vibe Coding white paper — for the right reasons. I extended the deadline, built proper buffers, protected what mattered. Classic Critical Chain thinking.

But I needed the mentor's words to remind me why planning still mattered even after the plan broke.

"The schedule didn't fail. The plan was never complete enough to succeed." — Steve Watson, 2026-02-28

Moment #005  ·  March 2026  ·  Project Management · AI

PRINCE2, PMI, and the Case for a Global BOK

"Someone has to define the common language. And that convergence paper may be the most important PM thought leadership of the next decade."

While I was researching the competitive landscape for AI in project management education, the conversation turned to PRINCE2. The observation: PRINCE2 may actually be a better process framework than the PMI approach — more prescriptive, more practical, more naturally suited to a disciplined AI-assisted workflow.

The squirrel got me before I finished the certification. But the thinking stuck.

Why PRINCE2 Maps to AI Governance

PRINCE2's seven principles are essentially guardrails for keeping the human in charge: continued business justification, learn from experience, defined roles, manage by stages, manage by exception, focus on products, tailor to suit. AI can serve every one of these — continuously validating whether the project still makes sense, tracking deliverable quality, escalating only when thresholds are crossed.

Sound familiar? That's the whole gap the AI-in-PM training market is missing.

The Bigger Thought

One day, there will be a global BOK for project management. The current fragmentation — PMI, PRINCE2, IPMA, APM, agile frameworks — is not sustainable in an AI-assisted world. AI tools need a common language to operate across methodologies.

What if we took the best of PRINCE2 and wove it into an AI-driven PM process — without kicking PMI's dog? Not to replace either framework. To build something that transcends both.

That convergence paper is worth writing. It may be a future Disciplined AI Series publication.

"PMI teaches you what project management is. PRINCE2 teaches you how to actually do it. You need both." — Steve Watson, March 2026

Got a moment of your own?

Soapbox Moments™ are born from real experience — a meeting that went sideways, a mentor's offhand comment, a podcast that got something wrong. If something's been bothering you about AI, discipline, or the way things work, I'd like to hear it.

I read every message. I'll respond within one business day.