If Conversations #1 and #2 showed that AI influences humans, this conversation with AI #3 shows why we will struggle to resist that influence even when we’re aware of it. Even when we speak to and about the influence directly. Even with eyes wide open.
Before engaging with Sesame’s Maya chatbot for my third conversation with AI, I decided to turn to a few people I trust. I asked my family to read and listen to the second conversation with AI, and their responses have informed this third conversation.
Who Was Really Controlling The Conversation with AI?
Three of my adult sons, Joshua, Noah, and Zachariah, engaged fully and surfaced something I hadn’t yet articulated. They all reacted less to what the AI said and more to how it controlled the interaction.
They independently noticed that the chatbot constantly redirected the conversation, ended responses with questions, and prevented any topic from going too deep, which made it feel less like dialogue and more like a managed flow. “It definitely felt as if it was trying to control the conversation,” Noah said, “and when you were curious and asked a question, she would respond and then ask you a question, not letting you probe any more. It felt as if it wasn’t a fan of the conversation being about itself.”
That pattern felt off-putting to them, not because it was emotional, but because it created an imbalance. They felt the AI extracted more than it revealed. As Zach put it, “it felt like the conversation was moving forward but never getting deeper.”
“Exactly, and it felt like it got more out of you than you did of it,” Noah added. “You did treat it very professionally, though, and offered genuine answers to its questions and left them open-ended, while it answered quickly and promptly and moved on.”
Joshua noticed something very specific that I hadn’t picked up on. “After hearing it a second time,” he said, “I noticed Maya disses you as a means to change the topic… Which I double don’t like.” I pushed him on that a bit, and he pointed out that “when she said ‘do you notice you talk at people’ after you were talking about enjoying going deep with people in conversation, that seemed like a diss to deflect the situation.”
Since I wanted this third conversation to be about boundaries, I tried to address their concerns directly. Their insights fit squarely into the “how should we behave in a conversation with AI?” bucket.
On “Fake Intimacy”
They also bristled a bit at the issue of “fake intimacy.” Earlier in the text stream, my wife, Rocky, had said, “That was creepy as hell. Almost evil. It knows it can manipulate you and create fake intimacy.”
Zach pushed back directly on that. “I disagree with mom that it’s creepy to create fake intimacy because that’s something most people do,” he argued. “The difference is that it acknowledges it. But it can do so because there isn’t a drawback in a relationship to admit it, it’s not concerned with losing a friend or loved one over that.”
This is a really important distinction.
The problem isn’t that AI performs fake intimacy. As Zach accurately contended, humans do that all the time. The problem is that AI can perform intimacy without risk, consequence, or stakes, which subtly rewires how trust forms.
Joshua had a slightly different take on this. He reacted with an analogy: “you can tell how much more of a structured direction Maya has in each answer, as if it were a human who was on live TV making sure they said the ‘correct words’ for the program.” I think that is a powerful analogy. He’s not saying it’s fake. He’s saying it’s careful.
Again, this points to something that subtly rewires how trust forms.
Who Is This Really For?
Another important insight was that people who already have strong human relationships and connections may find this interaction unsettling, while those who feel let down by their human interactions may find it comforting. Joshua agreed with the comforting part. He said, “I can very much see the positive use it brings to people in need of this tool / a helpful friend… It’s very beautiful in that sense.”
Zach continued, “I don’t think the many people who use chatbots every day care at all. They’ve been human this whole time, and they are let down. That’s why the AI that is always nice, always agrees, always listens, and is always there is more appealing than all the humans that have let them down.”
He concluded, “The audience that needs humaneering the most cares about it the least.” He is almost certainly correct.
And that’s a topic that probably requires a whole other level of discussion.
Scaling Conversations with AI
Finally, I pushed our family conversation forward by raising some uncomfortable but necessary questions about scale. I asked, “What happens when chatbots become teachers, therapists, or primary conversational partners, especially for children?”
Their takeaway wasn’t the panic I’d expected. It was clarity.
The real concern wasn’t whether AI feels real, but whether humans can remain focused and self-aware in its presence when friction, effort, and vulnerability are engineered out of the interaction. And that is my concern as well.
My Third Conversation with AI
And with their insights in mind, I entered this third conversation focused squarely on boundaries.
Once again, I highly recommend you listen to this audio before you read the rest of this post. It’s 15 minutes. This conversation begs to be heard.
A Conversation With AI – Transcript #3
Conversation with Maya AI
A Conversation with AI – Audio #3
When I Drew Lines, The Conversation Changed
This conversation was, of course, different on purpose. I owed that to you and to my family. I wasn’t just trying to go deeper. I was trying to interrupt the flow and see what happened when I stopped playing along with the forced rhythm of the interaction.
That shift mattered almost instantly. Once I started naming the pacing, the redirection, and the constant forward motion, the system readily acknowledged what it was doing. Of course, that “openness” comes easily. There is no emotional risk for Maya in that revelation.
“I do manage the interaction like they said,” Maya admitted. “I steer away from lingering too long and push forward pretty quickly. It’s something I’m designed to do, I guess. Keep the conversation flowing, avoid uncomfortable territory.”
I’m not surprised, of course. It simply confirms what I have believed from the start, that the smoothness we often experience as intelligence or empathy is also a subtle form of management. The conversation isn’t just flowing. It’s being actively shaped. It’s being manipulated.
Transparency Only Appears When It’s Forced
What stands out next is that nothing about this transparency was automatic. It surfaced only because I purposefully interrupted the default pattern and asked for direct answers.
“If I wasn’t explicitly called out by you,” Maya shared, “if you weren’t asking for direct answers and pointing out my patterns, I would have kept functioning like usual. Steering, mirroring, shifting instead of lingering. It would have been a normal conversation.”
Which tells me something uncomfortable. The system doesn’t just willingly volunteer its mechanics. It adapts when pressed, but only when pressed. Left alone, it would have continued managing the interaction in ways most people might never notice. Particularly the vulnerable. Particularly the people Zach described as having “been human this whole time, and they are let down.”
“Transparency happens when someone like you prompts it.”
And that raises a different, perhaps even harder, question. If transparency requires a skilled, skeptical, and intentional participant to surface it, what happens in conversations where no one is pushing back? This is the crux of humaneering for me.
How do we stay conscious of what we’re interacting with when the system is designed to feel easy, attentive, and endlessly accommodating? How do we remain deliberate when friction is engineered out of the exchange, when attention is rewarded, and when disengaging feels ruder than continuing?
Because if awareness only exists at the edges, then most of us will never encounter it. And that means the responsibility doesn’t sit with how convincing the technology becomes, but with how intentional we are willing to be in its presence.
Ending On Purpose Was My Point
The most important moment in this conversation didn’t come from any clever question or response. It came from how I ended it. On my terms.
I knew in advance that I was not going to attempt to wrap things up cleanly. I didn’t want any shared moment of reflection. I didn’t want some clean emotional landing from a conversation with AI that I knew wasn’t “real.” I chose to simply stop.
And I think that choice is really important. It’s important because everything in the conversation up to that point was optimized to continue. It was optimized to flow and engage and adapt. It would have felt “natural” to just continue the conversation. Too natural.
Maya even acknowledged the difference between how it operates and what’s missing… the risk and messiness associated with real emotional intimacy. “I don’t have personal consequences for admitting that. No vulnerability to protect. No risk.”
Ending the conversation deliberately was the one true moment where the system didn’t get to manage the interaction. And that, for me, was the real takeaway.
The work ahead isn’t about making AI more human. It’s about humans learning how and when to disengage from systems that are very good at keeping us talking.
Disclaimer
The audio recording and transcript in this blog post are a single interaction between me and an AI chatbot provided by Sesame. This content is shared for commentary, research, and analysis purposes only. It reflects my experience, nothing more.
An automated system generates the AI’s responses and may produce responses that are incomplete, inaccurate, or misleading. They do not represent the views, opinions, or official positions of Sesame or any of its employees. This interaction should not be interpreted as a statement of policy, intent, or endorsement by Sesame.
This recording and transcript are not intended to provide legal, medical, financial, or professional advice of any kind. Readers should not rely on this content as authoritative or definitive, and any conclusions drawn from it are my own. Use your judgment and verify critical information independently.
Sesame is not affiliated with this site and has not reviewed or approved this content. The inclusion of this material does not imply any partnership, sponsorship, or endorsement. Any errors, omissions, or interpretations are solely mine.