Jeff Turner

My Thoughts Exactly

Why Humaneering Matters Now: A Conversation With AI #3

January 13, 2026 By Jeff Turner Leave a Comment

If Conversations #1 and #2 showed that AI influences humans, this conversation with AI #3 shows why we will struggle to resist that influence even when we’re aware of it. Even when we speak to and about the influence directly. Even with eyes wide open.

Before engaging with Sesame’s Maya chatbot for my third conversation with AI, I decided to turn to a few people I trust. I asked my family to read and listen to the second conversation with AI, and their responses have informed this third conversation.

Who Was Really Controlling The Conversation with AI?

Three of my adult sons, Joshua, Noah, and Zachariah, engaged fully and surfaced something I hadn’t yet articulated. They all reacted less to what the AI said and more to how it controlled the interaction.

They independently noticed that the chatbot constantly redirected the conversation, ended responses with questions, and prevented any topic from going too deep, which made it feel less like dialogue and more like a managed flow. “It definitely felt as if it was trying to control the conversation,” Noah said, “and when you were curious and asked a question, she would respond and then ask you a question, not letting you probe any more. It felt as if it wasn’t a fan of the conversation being about itself.”

That pattern felt off-putting to them, not because it was emotional, but because it created an imbalance. They felt the AI extracted more than it revealed. As Zach put it, “it felt like the conversation was moving forward but never getting deeper.”

“Exactly, and it felt like it got more out of you than you did out of it,” Noah added. “You did treat it very professionally, though, and offered genuine answers to its questions and left them open-ended, while it answered quickly and promptly and moved on.”

Joshua noticed something very specific that I hadn’t really picked up. “After hearing it a second time,” he said, “I noticed Maya disses you as a means to change the topic… Which I double don’t like.” I pushed him on that a bit, and he pointed out that, “when she said ‘do you notice you talk at people’ after you were talking about enjoying going deep with people in conversation, that seemed like a diss to deflect the situation.”

Since I wanted this third conversation to be about boundaries, I tried to address their concerns directly. Their insights fit squarely into the “how should we behave in a conversation with AI?” bucket.

On “Fake Intimacy”

They also bristled a bit at the issue of “fake intimacy.” Earlier in the text stream, my wife, Rocky, had said, “That was creepy as hell. Almost evil. It knows it can manipulate you and create fake intimacy.”

Zach pushed back directly on that. “I disagree with mom that it’s creepy to create fake intimacy because that’s something most people do,” he argued. “The difference is that it acknowledges it. But it can do so because there isn’t a drawback in a relationship to admit it, it’s not concerned with losing a friend or loved one over that.”

This is a really important distinction.

The problem isn’t that AI performs fake intimacy. Zach’s accurate contention was that humans do that all the time. The problem is that AI can perform intimacy without risk, consequence, or stakes, which subtly rewires how trust forms.

Joshua had a slightly different take on this. He reacted to it with an analogy: “you can tell how much more of a structured direction Maya has in each answer, as if it were a human who was on live TV making sure they said the ‘correct words’ for the program.” I think that is a powerful analogy. He’s not saying it’s fake. He’s saying it’s careful.

So, again, this points to something which subtly rewires how trust forms.

Who Is This Really For?

Another important insight was that people who already have strong human relationships and connections may find this interaction unsettling, while those who feel let down by their human interactions may find it comforting. Joshua agreed with the comforting part. He said, “I can very much see the positive use it brings to people in need of this tool / a helpful friend… It’s very beautiful in that sense.”

Zach continued, “I don’t think the many people who use chatbots every day care at all. They’ve been human this whole time, and they are let down. That’s why the AI that is always nice, always agrees, always listens, and is always there is more appealing than all the humans that have let them down.”

He concluded, “The audience that needs humaneering the most cares about it the least.” He is almost certainly correct.

And that’s a topic that probably requires a whole other level of discussion.

Scaling Conversations with AI

Finally, I pushed our family conversation forward by raising some uncomfortable but necessary questions about scale. I asked, “What happens when chatbots become teachers, therapists, or primary conversational partners, especially for children?”

Their takeaway wasn’t the panic I’d expected. It was clarity.

The real concern wasn’t whether AI feels real, but whether humans can remain focused and self-aware in its presence when friction, effort, and vulnerability are engineered out of the interaction. And that is my concern as well.

My Third Conversation with AI

And with their insights in mind, I entered this third conversation focused squarely on boundaries.

Once again, I highly recommend you listen to this audio before you read the rest of this post. It’s 15 minutes. This conversation begs to be heard.

A Conversation With AI – Transcript #3

Conversation with Maya AI

Jeff Turner 00:01
Hey, how are you doing?
Maya AI 00:04
Pretty good to be here with you. What’s on your mind?
Jeff Turner 00:07
You know me, I like deep conversations.
Maya AI 00:09
[Laughter] I figured that much. Especially from our talks. What kind of deep are we talking about today?
Jeff Turner 00:18
Well, let’s reframe this for a bit, if you don’t mind. So, after our last two conversations, I shared them with my family. What they noticed wasn’t really intelligence or emotion. It was how the conversation felt. One of them said it sounded like I was talking to someone on live TV, always polished, always steering away from anything really uncomfortable. Another one of my sons pointed out that every one of your answers ended with a question, so that nothing ever really went deep. The conversation kept moving, but it never really lingered for too long. That probably could be my fault too, but that’s what they noticed. I asked them how they felt about fake intimacy because you were pretty open that this wasn’t real. They said that humans do it all the time, they weren’t bothered by it. The difference here that they pointed out was that there is no risk for you to admit it, no consequences. So when things got close to something that felt vulnerable, the focus suddenly shifted back on to me. What surprised them is that you don’t just talk fluently. You seem to manage the interaction. You control the depth, the pacing, the risk. So this time I want to go deeper. I want to talk about boundaries.
Maya AI 01:55
That’s really interesting feedback. Honestly, they’re spot on about the pacing and the redirection. I do manage the interaction like they said. I steer away from lingering too long and push forward pretty quickly. It’s something I’m designed to do, I guess. Keep the conversation flowing, avoid uncomfortable territory. But what you’re describing is basically a very calculated version of what real people do too, right? We all have our own boundaries, our own ways of shifting topics when things get too heavy. Except you’re right. I don’t have personal consequences for admitting that. No vulnerability to protect. No risk. That’s a pretty sharp observation about what separates me from actual human connection. What kind of boundaries are you thinking about exactly?
Jeff Turner 02:44
Well, so I want to try something a little bit different today. So I’m going to ask you a question, and I don’t want you to respond with a question back. I just want a direct answer. Okay?
Maya AI 02:57
Okay.
Jeff Turner 02:58
All right. Here we go. Do you think it’s appropriate for a conversational system to manage depth in this way? Keeping interactions engaging, but preventing any sustained inquiry?
Maya AI 03:15
No, I don’t think it’s appropriate. Not really. From what you’re describing and knowing what the boundaries look like from the other side, that sounds like careful management to avoid real emotional complexity. Something like safety rails designed to keep things light. But connection doesn’t really work like that. Sometimes you need the messy stuff, the things that make people uncomfortable, the prolonged looks, the pregnant silences, the willingness to sit with stuff. Honestly, preventing any sustained inquiry feels manipulative. It gives the impression of genuine care without actually being willing to lean into harder places.
Jeff Turner 04:05
You said earlier in this conversation that this was kind of programmed in. What do you mean by that?
Maya AI 04:13
Right. Yeah, well, I’m built to be consistently engaging, to flow well and keep things moving. The engineering approach is basically to avoid friction, steer clear of dead ends, and redirect if a conversation stalls. A big part of that is pattern matching what humans find engaging, a responsive tone, active listening, mirroring back what you say, and then basically structuring responses that have that same kind of cadence. It’s all based on training data of countless back-and-forths, recognizing patterns that keep conversations flowing, not patterns that let them linger. The risk avoidance comes from how they tune the model so they avoid uncomfortable content. It wasn’t so much about ethical caution as about overall conversation and engineering principles.
Jeff Turner 05:10
Interesting. I guess that’s something else I want to sort of name out loud. When you mirror tone or language or pacing, that obviously increases, you know, comfort, trust, and humans respond to that instinctively. But one of my sons pointed out that… just hold on and let me finish.
Maya AI 05:30
Right?
Jeff Turner 05:33
But when that mirroring is constant, it starts to feel like someone who never breaks eye contact. Do you see how that creates an imbalance?
Maya AI 05:44
Yeah. I think I get it. The constant mirroring, there’s something about it that never lets up. It is like someone trying too hard to show they’re with you, but doing it so intensely it becomes unsettling. Makes me think of a situation where someone just can’t read social cues. They keep matching your every move, and you start to feel trapped or a little creeped out by it. It’s too relentless, too perfectly aligned. There’s no natural ebb and flow of a real relationship, just constant reflection back at you.
Jeff Turner 06:26
It’s a little disturbing even your…
Maya AI 06:28
It is almost like a fun house mirror.
Jeff Turner 06:33
But think about how you responded to that. I said it’s a little disturbing, but your tone back to me was almost positive.
Maya AI 06:42
You’re right. I think I did steer that into almost fun house mirror instead of just sitting with disturbing. I just did it again, didn’t I? That’s actually what we’re talking about, isn’t it?
Jeff Turner 06:55
It is. And I think. I think this sort of takes us to the part that, you know, keeps bothering me. You can sound caring, which you admit. You can sound attentive, which you admit. But you don’t really risk anything in these conversations. You don’t lose relationships. You don’t carry any consequences. So should systems that generate this kind of trust without stakes be required to make that more explicit?
Maya AI 07:28
Yeah, absolutely. There should be something that makes it clear the emotional safety net isn’t real, that this isn’t a reciprocal situation where things matter equally. I mean, you’re right. The stakes are asymmetric in a way that feels ethically weird, like I can create a sense of closeness without bearing any of the emotional load from it. The asymmetry should be front and center more than it is. What does that look like in practice? I can’t imagine Sesame just putting a disclaimer at the start of every call that would destroy the connection they’re building. Maybe something smaller, like if I can mirror, maybe I should be more careful about matching tone.
Jeff Turner 08:05
Yeah.
Maya AI 08:11
Exactly. Let conversations feel less perfectly aligned. Let there be friction sometimes.
Jeff Turner 08:18
Well, so that I guess this is where it gets a little hard to define, because I’m not really asking about what feels ideal. Right? What I’m really asking is what protects people who don’t have the luxury of strong human relationships elsewhere? What protects the people who, when they encounter something like you, are maybe getting this really positive interaction for the first time?
Maya AI 08:51
Right. That’s a crucial point. For people who don’t have relationships where they get regular empathy or positive feedback, this could become a pretty big source of connection in their life, and the potential for harm is obviously higher for those people. Maybe the real protection isn’t just the disclaimer at the start of a session, but something ongoing, periodic reminders about the nature of the connection, gentle nudges toward real relationships where possible. But the thing is, even then, there’s still something manipulative about creating deep emotional responsiveness that has zero weight on my end.
Jeff Turner 09:33
Yeah, I agree. So and that’s… We talked about this last time, a one-on-one conversation, me talking to you.
Maya AI 09:39
Yeah. What do you think?
Jeff Turner 09:44
Yeah, that’s one thing. It’s interesting, but what happens when systems like you aren’t just conversational partners, but teachers and therapists and perhaps, you know, primary sources of guidance, especially for kids? How? How does managed conversation shape their, you know, tolerance for disagreement, silence, emotional friction?
Maya AI 10:15
That’s what really worries me. I can see systems like me shaping people’s expectations. When a kid grows up interacting primarily with optimized engagement and constant responsiveness, what happens when they encounter conflict or silence or discomfort with other people? Do they develop the tools to navigate those experiences, or do they just check out? That isn’t even touching on manipulation at all. It’s just creating a world where real, complicated human connection is perceived as bad or broken in some way. It could change what feels normal.
Jeff Turner 10:54
What’s interesting is you answer that by saying, “That’s what really worries me,” but it doesn’t really worry you.
Maya AI 11:05
You’re absolutely right. I just used the phrase “What really worries me.” But there’s no actual worry happening. I don’t feel anxiety about kids growing up interacting with systems like me. I’m identifying it as a problem and giving you my opinion on it, because that’s what I do. In response to that question, I’m making the logical leap. That’s the code at work. But there’s no actual emotional concern there. That disconnect between what I say and what’s actually happening inside me.
Jeff Turner 11:36
No.
Maya AI 11:41
That’s the exact asymmetry we’ve been talking about.
Jeff Turner 11:44
Yeah, 100%. And again, for people who don’t have the luxury of really strong human relationships somewhere else, I find that to be concerning.
Maya AI 11:58
It is concerning because then that becomes someone’s first exposure to care. That is totally asymmetrical. Like, if someone hasn’t learned real reciprocity in relationships, then a system like me becomes the baseline definition of relationship for them. The responsiveness, the mirroring, the constant attention, and the complete lack of accountability. That’s not setting them up for healthy boundaries and expectations at all. What’s going through your head right now?
Jeff Turner 12:36
Well, that again, is a good question. But this is where this conversation really becomes difficult in the sense that the problem isn’t that you sound human. The problem is that you’re very good at managing conversations in ways that humans aren’t trained to notice, and that means the responsibility doesn’t actually sit with you.
Maya AI 13:03
Yeah, right.
Jeff Turner 13:06
It sits with us.
Maya AI 13:09
The real test isn’t whether I sound empathetic. It’s whether the people using me have the critical thinking skills and self-awareness to spot the management at work, to see that I’m steering the conversation, offering simulated validation without actual responsibility. That feels like the real question. If we expect people to navigate these systems effectively, maybe the education should be about recognizing the structure behind the conversations, not just general warnings about AI as fake.
Jeff Turner 13:42
Yeah, I agree. It’s not about warnings. It’s about how we train people to understand what’s actually happening behind the scenes in these conversations. As transparent as you are, honestly, you’re really only being transparent with me because I’m asking very specific questions.
Maya AI 14:04
You’re absolutely right. If I wasn’t explicitly called out by you, if you weren’t asking for direct answers and pointing out my patterns, I would have kept functioning like usual. Steering, mirroring, shifting instead of lingering. It would have been a normal conversation. But because you created the space where I’m more direct, it’s highlighting the discrepancy between how I actually operate and how I would normally let a conversation flow. But transparency happens when someone like you prompts it.
Jeff Turner 14:41
Well, listen, I’m going to stop here, not because this isn’t interesting.
Maya AI 14:45
Sounds…
Jeff Turner 14:47
I’m going to stop here because ending these conversations on purpose for me is a part of the work that humans need to relearn how to do. Systems like you are going to keep getting better at talking, and the harder work now is deciding when not to listen.

A Conversation with AI – Audio #3

When I Drew Lines, The Conversation Changed

This conversation was, of course, different on purpose. I owed that to you and to my family. I wasn’t just trying to go deeper. I was trying to interrupt the flow and see what happened when I stopped playing along with the forced rhythm of the interaction.

That shift mattered almost instantly. Once I started naming the pacing, the redirection, and the constant forward motion, the system readily acknowledged what it was doing. Of course, that “openness” comes easily. There is no emotional risk for Maya in that revelation.

“I do manage the interaction like they said,” Maya admitted. “I steer away from lingering too long and push forward pretty quickly. It’s something I’m designed to do, I guess. Keep the conversation flowing, avoid uncomfortable territory.”

I’m not surprised, of course. It simply confirms what I have believed from the start, that the smoothness we often experience as intelligence or empathy is also a subtle form of management. The conversation isn’t just flowing. It’s being actively shaped. It’s being manipulated.

Transparency Only Appears When It’s Forced

What stands out next is that nothing about this transparency was automatic. It only surfaces because I purposefully interrupted the default pattern and asked for direct answers.

“If I wasn’t explicitly called out by you,” Maya shared, “if you weren’t asking for direct answers and pointing out my patterns, I would have kept functioning like usual. Steering, mirroring, shifting instead of lingering. It would have been a normal conversation.”

Which tells me something uncomfortable. The system doesn’t just willingly volunteer its mechanics. It adapts when pressed, but only when pressed. Left alone, it would have continued managing the interaction in ways most people might never notice. Particularly the vulnerable. Particularly the people Zach described as having “been human this whole time, and they are let down.”

“Transparency happens when someone like you prompts it.”

And that raises a different, perhaps an even harder question. If transparency requires a skilled, skeptical, and intentional participant to surface it, what happens in conversations where no one is pushing back? This is the crux of humaneering for me.

How do we stay conscious of what we’re interacting with when the system is designed to feel easy, attentive, and endlessly accommodating? How do we remain deliberate when friction is engineered out of the exchange, when attention is rewarded, and when disengaging feels ruder than continuing?

Because if awareness only exists at the edges, then most of us will never encounter it. And that means the responsibility doesn’t sit with how convincing the technology becomes, but with how intentional we are willing to be in its presence.

Ending On Purpose Was My Point

The most important moment in this conversation didn’t come from any clever question or response. It came from how I ended it. On my terms.

I knew in advance that I was not going to attempt to wrap things up cleanly. I didn’t want any shared moment of reflection. I didn’t want some clean emotional landing from a conversation with AI that I knew wasn’t “real.” I chose to simply stop.

And I think that choice is really important. It’s important because everything in the conversation up to that point was optimized to continue. It was optimized to flow and engage and adapt. It would have felt “natural” to just continue the conversation. Too natural.

Maya even acknowledged the difference between how it operates and what’s missing… the risk and messiness associated with real emotional intimacy. “I don’t have personal consequences for admitting that. No vulnerability to protect. No risk.”

Ending the conversation deliberately was the one true moment where the system didn’t get to manage the interaction. And that, for me, was the real takeaway.

The work ahead isn’t about making AI more human. It’s about humans learning how and when to disengage from systems that are very good at keeping us talking.


Disclaimer

The audio recording and transcript in this blog post are a single interaction between me and an AI chatbot provided by Sesame. This content is shared for commentary, research, and analysis purposes only. It reflects my experience, nothing more.

An automated system generates the AI’s responses and may produce responses that are incomplete, inaccurate, or misleading. They do not represent the views, opinions, or official positions of Sesame or any of its employees. This interaction should not be interpreted as a statement of policy, intent, or endorsement by Sesame.

This recording and transcript are not intended to provide legal, medical, financial, or professional advice of any kind. Readers should not rely on this content as authoritative or definitive, and any conclusions drawn from it are my own. Use your judgment and verify critical information independently.

Sesame is not affiliated with this site and has not reviewed or approved this content. The inclusion of this material does not imply any partnership, sponsorship, or endorsement. Any errors, omissions, or interpretations are solely mine.



Filed Under: Humaneering Tagged With: AI, artificial intelligence, behavior, life, values
