“I do not consent.”
That was my nephew Shane’s response to a text message I sent. I had no idea what he meant. And I wouldn’t understand until later in our thread.
Shane and I hadn’t texted in over a year. Out of nowhere, he reached out to talk about AI detection software. He wanted to know if I had heard of tools that could tell whether audio, video, or an image was generated by a machine. My sister had sent him one of my AI voice posts, “I’m A Little Creeped Out By You.” He thought it was interesting. And he thought AI detection software needed to exist. “Otherwise,” he wrote, “we will not be able to recognize reality.”
I was in the midst of editing a blog post about a voice-enabled AI chatbot I am going to unleash here soon. The timing was purely coincidental.
But what happened next was something else entirely: we were already talking about the same thing. We just didn’t know it yet.
How We Got To “I Do Not Consent”
When Shane’s text came in, I smiled. We don’t communicate that often, so it was a welcome surprise. The fact that he was asking me about AI was particularly rewarding. He genuinely liked my writing and wanted my opinion.
Early in the exchange, he said, “Hope you didn’t unleash the AI bot on me.” I had no idea what he meant, so I just replied, “LOL. I haven’t seen the video of Bibi. And thanks for reading my stuff.”
He came back: “Of course. But you didn’t answer my question about unleashing the AI bot on me.” I still didn’t put it together. I was in the middle of writing a post about a voice AI chatbot, so I said, “I’m doing that later this week. Literally.”
That’s when he said, “I do not consent.”
I kept joking. He kept not joking. Fourteen messages later I wrote, “This entire conversation was AI generated.” And that’s when it finally landed. “Yeah, I knew something was off from the first response,” he replied. “I told my friend I was with that, and they thought I was crazy.”
He Wasn’t Joking
The problem wasn’t Shane. The problem was me.
My Bubble
I spend my free time building tools, writing about AI, and testing voice clones. I keep mistaking familiarity for common knowledge, assuming again and again that most people are, more or less, keeping up.
Shane doesn’t spend his time in this world. When I asked him if he minded if I wrote about our exchange, he said, “Yeah, not at all. Hopefully I don’t come off as too much of an idiot lol.”
He’s not an idiot. Not even close.
Shane is sharp. He reads my stuff. He tries to stay current. And his conclusion, after a text exchange with his own uncle, is that he doesn’t know what to trust anymore. That’s not a failure of intelligence. That’s a completely rational response to the environment he’s living in.
How Many Shanes Are Out There Feeling The Same Way?
Because he’s definitely not alone. In fact, the very same day, an article about British photographer Robert Wilson appeared in the “photography” category of my feed reader. Wilson had created 12 individual covers for Radio Times magazine, shooting 210 high-resolution photographs with a locked-off camera to produce a looping video of comedians trying not to laugh.
The result was not entirely unexpected: “the internet” decided it was AI. When Wilson explained exactly how he made it, one commenter replied, “Then that’s an awful lot of work to produce something that looks exactly like AI.”
I couldn’t help but tie this to our conversation. Wilson had all the receipts. He described the process. He had the complete evidence, and none of it mattered. The prior was already set. Which means the problem isn’t just that synthetic content is flooding the zone and making it hard to identify what’s real. The current runs the other way now, as well: real things are being mistaken for synthetic things.
An uncle’s genuine reply to his nephew’s text. A photographer’s genuine craft. Both were flagged as machine-made. Both were defended by their creators. And neither defense was fully believed.
It’s Even Broader Than That
Right now, in the middle of an active military conflict, the head of the FCC is warning broadcast television stations that their licenses could be at risk over war coverage he’s labeled fake. Some of that coverage turned out to be well-sourced. Some of the photos and videos circulating online as news were actually fabricated, planted by Iranian state media or by internet trolls.
The problem is that real disinformation and legitimate journalism are running side by side in the same feeds, making the same claims about themselves. Reporters are defending their sources. Officials are questioning their motives. And somewhere in the middle of all of it, we’re all trying to stay informed, and we’re somehow supposed to figure out which is which.
That’s the environment Shane is navigating. It’s the one we’re all navigating now. And this isn’t a technology or an intelligence problem. It’s a signal problem.
When the volume of noise is high enough, and enough of that noise is designed to look like signal, the rational response is skepticism of everything. Including a text response from a family member. Including a photographer’s 210 carefully shot frames. Including what you’re reading right now.
The ground doesn’t feel solid to people who don’t live inside the information space the way some of us do. Then again, it doesn’t feel solid to many of us who do. And maybe the more honest question is whether the ground actually is solid, or whether the rest of us have just stopped noticing the quicksand we’re standing in.
Orwell Was Wrong. Huxley Wasn’t.
I keep coming back to something Neil Postman wrote in 1985. Not because it’s new thinking, but because it’s the kind of writing that lands harder and harder the more you sit with it.
Postman opened “Amusing Ourselves to Death” by comparing two visions of the future. Orwell’s and Huxley’s. And he argued we’d been worrying about the wrong one.
As Postman put it: “What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance.”
Postman closed that foreword with this line: “This book is about the possibility that Huxley, not Orwell, was right.”
Nobody deprived Shane of information. Nobody suppressed Wilson’s photographs. The sea of irrelevance did all the work.
Huxley was right. And so was Postman.
We Need a Safe Word
Before we ended our conversation, Shane asked for something I’m hearing more and more: “Need to figure out our AI safe word.”
Shane isn’t paranoid. He isn’t technophobic or a Luddite. He’s a smart guy trying to navigate a world where the signals he used to trust have stopped being trustworthy. Where a text response from family reads like a bot. Where a photographer’s genuine work product looks like a machine’s output. Where real journalism and state-planted lies share the same feed and the same font and the same algorithms.
When I asked him how he was feeling about all of it, he was quick to respond. “I don’t know what to trust anymore.”
Neither do a lot of people. And the ones who think they do might just be standing a little deeper in the quicksand than they realize.

From the Comments

Jennifer: I wish I could post this on Instagram and Facebook to share with all my friends, kids, kids’ friends, kids’ friends’ parents. I’ve jokingly talked about the “death of the internet” because everything on it is questionable reality… but the trust disappearing in any form of communication besides face-to-face is a more staggering reality. Thanks for sharing this.
Jennifer, I’d be happy for you to post it to those places, since I’m no longer there. 🙂 AI-generated content is projected to make up 90% of online content by 2026, and Cloudflare predicted this past week that AI bot traffic will surpass human browsing traffic by 2027. No need to joke about it; it’s real. The inevitable result is going to be some loss of trust… but I’m not sure it has to be a complete loss. If, for example, some guy in his basement is just pumping out AI slop, I won’t trust that. But if the New York Times begins using some form of generative AI to help them write, grounded in their own investigative data, I would probably trust that. I’d know it was fact-checked and that a reputable news organization stood behind it. I’ve been thinking about writing about this angle for a long time and may do so. Trust should be less about “what” wrote a piece and more about “who” stands behind it.