
Judgement, not Method
Print works at Beamish Open Air Museum. Photo by Maggie Stephens, CC-BY 2.0.

Most of what I write or build now has AI in it somewhere:

  • As a sparring partner at midnight
  • As a draft I'll spend the next hour reshaping
  • As a thesaurus, a fact-checker, a second pair of eyes on a paragraph that isn't landing

The conversation about this has hardened into two postures, and neither fits how the work happens.

Two camps make most of the noise.

  • One says: I used AI, therefore fast, therefore good, therefore ship
  • The other says: I used AI, therefore suspect, therefore confess

Both are wrong in the same way. Whether AI was involved tells you nothing about whether the work is any good.

What I look for in any piece of work is judgement: someone choosing, rejecting, revising, willing to be on the hook for what shipped. That question predates AI and will outlast it.


Start with the hype side, which is easier. There's a whole genre now of prompt-and-ship work. People burn through millions of tokens generating a "100-page guide" and treat the generation as the achievement. The internet fills with content that's median, slightly off, faintly the same. Cheap production has always produced glut. What's new is the scale, and pretending it's craft is corrosive. "I used AI" isn't a brag. It tells you nothing.

The purity side is harder. Its objections, made well, are real. Training data was scraped without consent. Real writers and artists are losing work to systems trained on what they made. Slop saturates the commons. Taste flattens. None of that is paranoid.

The reflex though — treating "AI-assisted" as a verdict — is what I'm pushing back on. Disclosure gets treated as a sort of confession. Hiding the tool becomes the smart move. None of this prevents appropriation, stops slop, or restores trust. It's a posture, and it pushes the practice underground, where there's more AI and less honesty.


The pro-AI side papers over something I won't: AI isn't a compiler. The "just another tool" line (reaching for spell-check or autocomplete as the analogy) doesn't hold. Compilers translate and linters flag; AI generates content you accept, reject, edit, or argue with. Those are different in kind.

However, none of this changes the test. Not "what tools did you use" but "is the work any good, and are you on the hook for it." A bad human draft and a bad AI-assisted draft fail in the same places: specificity, coherence, whether the author can stand behind a sentence when pushed. If AI has averaged the work into paste, specificity is where the damage shows up.

Slop fails those checks regardless of where it came from. Thoughtful work passes them regardless of where it came from. Provenance is evidence, not judgment.


So here's where I land:

AI is in my workshop.

It's in most of what you read going forward, whether the author says so or not. Per-line attribution is theater: there's really no honest way to flag every suggestion taken, rejected, or edited, and nobody would read it. Practice-level disclosure is what matters: using a model as a thesaurus, a sparring partner, a ghostwriter, a fact source, or a substitute for having read the material.

So:

  • Yes, I work with these tools
  • Yes, I edit aggressively
  • Yes, I stand behind what ships

The question to ask of any work isn't "did a model touch this." It's the older set of questions:

  • Is it any good?
  • Does the author understand it?
  • Is it coherent?
  • Does it survive a serious objection?

Those have always been the criteria. Generative tools may change the rate at which slop and good work both get made, but they don't change what slop is or what good work is.

Judge the work, not the method.