Jeff Turner

My Thoughts Exactly

Is “Are You Using AI?” The Right Question?

April 20, 2026 By Jeff Turner

Jay Thompson wrote a blog post. He wrote every word of it, then published it on his new site, Retired.Living. Seventeen days in, a reader says to him, “I see some of those double dashes. Are you using AI?”

He wasn’t. And the main reason he had an answer ready was that he’d published an “About Using AI” page before the site went live. Jay saw this moment coming. He prepared for the accusation. And then it happened… in just seventeen days.

This is where we are now. This is the new default assumption. Writers like Jay are pre-emptively building their AI defense before the indictments even arrive. Recently, I had to defend my own words in a text conversation with my nephew. His final analysis? “I don’t know what to trust anymore.” Last week, my wife shared an amazing photo she captured of the starter’s gun firing at the Arcadia Invitational, and the first comment in our family chat was, “AI.”

Of course, they were joking. That’s the thing with jokes, though. They’re often funny because they highlight a relatable, often uncomfortable, aspect of reality. This is indeed our new reality. And some of it is funny. But most of it isn’t.

Two Problems With Tools That Detect If Someone Is Using AI

Detection tools aren’t helping. Neither are all the people claiming they can tell when someone is using AI. They aren’t solving the “problem.” They may actually be making the situation worse. They present at least two problems.

AI Detection Tools Are Losing The Race

First, they’re losing the arms race, as both AI and humans are getting better at fooling the detection tools. The internet is flooded with examples. In addition, they often flag real human content as artificial. Highly structured, formal, or error-free writing is frequently flagged for lacking the expected “human randomness.”

I use Grammarly religiously. I’ve been using it since it launched. I love it. The problem? Using AI-powered tools like Grammarly to clean up writing can mistakenly trigger detection software. Frankly, the detection software industry is losing an arms race it started.

AI Detection Tools Are Making The Situation Worse

For every article you find where someone’s real work was flagged as AI, you can find another where someone fooled the AI detection tools using simple editing techniques. This isn’t a game. And the focus is simply wrong.

We’re treating this as a technical issue to be solved.

This has never been a technical problem.

Trust Doesn’t Live In The Words. It Never Has.

Here’s the real problem: Authorship has always been more complicated than we pretend. AI didn’t create that. It just forced us to look at it.

This has never been about who controlled the pen. Or the typewriter. Or the keyboard. And my newly held position is that AI is just a new keyboard. Just a highly influential one.

Presidents often don’t write their own speeches. Biographies get ghostwritten. CEOs send emails written by staff. I’ve personally written more emails for consulting clients than I could possibly count. Each of them was sent under the client’s name, not mine.

Trust Lives In The Person Delivering The Message

We’ve always extended trust to the person at the podium, not the person who wrote their speech. John F. Kennedy is famous for his speeches, but he didn’t write them. Some of Kennedy’s most famous speeches were written by Ted Sorensen. And here’s the thing: while Sorensen wrote the words, the policies, judgments, and ideas were Kennedy’s.

We do the same for corporate communications written by a team of staff, not by the person sending the email or signing the document. And we don’t care. Or, more correctly, we only care that the person signing it stands behind it.

Again, I have written hundreds of emails for consulting clients. I work closely with them. I know what outcome they’re seeking. They ask me to write them for any number of reasons. Sometimes, they just can’t find the words themselves. Sometimes it’s because they’re too close to it. Sometimes it’s because their writing style isn’t the right tone for the moment. But in every instance, they are delivered by the client. Not one of them has ever said, “Here’s something Jeff Turner wrote for me.” And I would have advised against it.

I could go on and on. Google Docs allows teams to easily share authorship of important documents, and nobody cares when a single human is given credit for the work when it is delivered to the client. It’s commonplace. I would argue it’s expected.

Companies hire marketing and PR firms to write their public-facing corporate content. Those companies actually win awards for doing it well. Nobody is castigating the company for not writing the copy themselves. And my bet is that this practice of seeking writing help has been going on since writing was invented.

“Authorship” has always been complicated. AI didn’t create that. But it is making us confront it.

The Real Question: Where Does Trust Actually Live?

Trust doesn’t live in the writing. Trust lives in the relationship.

It lives in the person or company behind the writing. It lives in the name on the signature line. And not in a disclosure or an “About Using AI” page either. That’s just paperwork, not trust.

Trust is a track record. Trust is a history. Trust accumulates post by post, interaction by interaction, moment by moment, over time.

My good friend and mentor Bill Leider taught me this. He said, “Behavior is the truest form of communication.” Consistent action is the only proof that counts. If you’re using AI to help you write, I truly don’t care. I care about you. I care about the person who stands behind it.

What I worry about is not who wrote the words, but who owns the thinking.

The Problem With Using AI Thoughtlessly

Sune Selsbæk-Reitz wrote a wonderful piece recently titled “Anchors Without Authors.” It explains why all of this is more complex than we imagine. And the reason is “anchoring.”

Tversky and Kahneman showed decades ago that the first number you encounter pulls all your subsequent estimates toward it, even when you know the number is arbitrary. Sune’s accurate argument is that AI responses function the same way.

When you’re using AI, the responses arrive fluent, structured, and confident. They arrive before you’ve fully formed your own position. And this fluency creates a feeling of truth, even in its absence.

You don’t start from your question anymore. You start from the answer. And from there, you refine. You adjust. You move within the frame rather than questioning it. The anchor does its work without announcing itself as influence.

The problem is not that we are using AI to write. It’s how we are using AI to write.

This Blog Post Was Built in Close Partnership With AI

Every observation in it is mine. I used Claude Opus 4.6 to help me think it through. It started with a long voice transcription where I just rambled my thoughts into my iPhone using WisprFlow. I then fed it to Claude, and we went back and forth riffing on the potential flow of this blog post. It ended with an organized structural outline. But I wrote the words here. And even if I hadn’t, I own this.

But here’s the thing: I had to keep asking whether the shape of the argument was mine too. The question Sune asked at the end of his essay is about precaution. It’s not a tool or a process. It’s a question that should lead to developing and nurturing an interrogation habit.

“Would I have arrived here on my own?” If yes, the AI accelerated something that was already mine. If no, I didn’t write it. It’s not mine. I accepted it. And there’s a difference between those two things that no disclosure page is ever going to capture.

Using AI: Conscious vs. Unconscious Acceptance

Conscious acceptance is editing. It’s craft. Conscious acceptance is a writer making a deliberate choice about what belongs. About what matters.

Unconscious acceptance is where agency slips. And this doesn’t happen in big moments, but in small ones. It’s the sentence that sounds right. The turn you didn’t see coming but nodded along to without pushing back. It’s the frame you might have eventually reached on your own, maybe, but now you can’t be sure. You want to be sure. You need to be sure.

Sune ended his piece with a question: Would I have arrived here on my own? His piece landed just as I was thinking through this, just after reading Jay’s LinkedIn post. My comment on Jay’s post (you can read it here) concluded with, “AI may be able to generate strings of words, but only we can mean them.”

In my conversation with Claude, as I thought through this post, I said, “If I could have gotten here without AI, then it accelerated something that was actually mine, not something handed to me.” Later that day, I found Sune’s Substack article and left a comment.

He and I ended up in the same place, from completely different directions, on the same day. That’s either a coincidence or an argument. I decided it was an argument.

Outside forces have always shaped what we write. A conversation. A book. An editor. A consultant. Now an AI. None of this is new, and it’s not the real threat.

The question isn’t whether outside forces shape your writing. They always have.

The question is whether you’re awake when they do.


Filed Under: Commentary Tagged With: artificial intelligence, humaneering, trust, writing

Comments

  1. Jay T says

    April 21, 2026 at 7:55 am

    I have a new mantra for when I write — Would I have arrived here on my own? (OH NO! An em dash…)

    That, and “Trust doesn’t live in the writing. Trust lives in the relationship.”

    Also, I’m going to be awake, alert, and aware.

    Another terrific piece, Jeff.

    • Jeff Turner says

      April 21, 2026 at 10:22 am

      Thank you, Jay. We have to have a whole new level of consciousness to get the real benefit from working with AI. Thanks for stirring my thinking.

