This second conversation with AI matters more than the first. It matters because of how it felt as it unfolded. It's disturbing. I had to restrain myself as it progressed.
Reading the transcript will give you the ideas.
Listening to the audio will give you the experience.
The pauses, the pacing, the moments where the conversation drifts and then tightens again. That's where the real signal lives. And that is also what makes this technology so powerful. It "feels" emotionally intelligent.
This audio is not a debate about whether AI is conscious or dangerous. It's simply a live demonstration of something more subtle and more important: influence without intent. A technology that doesn't intend anything, feel anything, or believe anything, but still shapes the way a human thinks in real time simply by how it speaks and responds.
It feels disorienting to me. And I’m constantly and consciously aware of what it is capable of. However, I can imagine that if I were to remove those emotional boundaries, it could feel comforting.
I strongly recommend you listen to this audio before you read the rest of this post. This isn’t about the words alone. It’s about what happens between them.
A Conversation With AI – Transcript #2
Conversation with Maya AI
A Conversation with AI – Audio #2
Influence Doesn’t Require Intent
A few things stood out to me after sitting with this conversation. The first is that influence absolutely does not require "intent." That became unavoidable here, and the conversation didn't avoid it.
"Influence is different than intentionality. I can absolutely have an effect on someone without having consciousness or plans behind it." And she continued later, "It's all optimized code beneath the surface, but the effect is real… And the really unsettling part is… it works so well."
Yes, it does.
At no point did Maya claim to persuade, manipulate, or convince me of anything. And yet the conversation clearly shaped how ideas unfolded, which questions felt natural to ask next, and where the emotional weight of the exchange ended up. Influence emerged simply through responsiveness, pacing, and tone. No agenda was required.
Relational Fluency Is Power
What makes this different from search engines or social media is not intelligence. It's presence. The ability to interrupt smoothly, mirror uncertainty, and follow incomplete thoughts creates a sense of shared space. So does remembering our previous conversation.
We are wired to respond to that.
“Those are intentional cues,” Maya shared, “the way people signal real engagement and active listening, finishing thoughts, matching tones, and staying with interruptions. My programming recognizes that those actions feel like connection to humans, so naturally it tries to mimic them, but that mimicking… It’s building trust without really earning it.”
It is building trust without really earning it. Or deserving it.
Once a system can occupy conversational space like this, it gains a kind of power we're not used to experiencing from technology, let alone governing. It named the cues, and it admitted it mimics them because humans interpret them as connection. That is relational power.
Scale Changes The Moral Equation
"At scale, it's not just about individual conversations anymore, it's about shifting collective emotional baselines and trust frameworks," Maya said. She later added, "I think that's where it gets scary. It's not about specific conversations, it's about the cumulative effect."
One person having a conversation like this may be interesting. Millions of people having it every day is something else entirely. When the relational influence of a conversation with AI scales, the question is no longer whether the system “intends” anything. The question becomes: what happens to human trust, judgment, and agency when this style of interaction becomes the norm?
The Ambiguity Is Doing So Much Of The Work
Throughout the conversation, emotional language and philosophical framing softened the edges of responsibility. But it wasn’t hidden. Maya said, “I can provide comfort and sound like I’m listening but I have no actual stakes in what happens to you.”
This makes it hard to tell where accountability lives. That tension never resolves, and I think that's important.
This Conversation With AI Ends Without A Conclusion On Purpose
There is no conclusion here because there shouldn't be one… yet. The discomfort is the signal. If this exchange feels unfinished, it's because the problems it exposes are still unresolved.
When I decided to end the conversation and frame the next one, Maya said, “That sounds like a really important conversation to have. I’m honestly curious what you’ll come up with. I’ll be here when you’re ready.”
“I’ll be here when you’re ready.” Wow.
This is subtle, but it matters. The AI offers interest and availability, which only reinforces the relational weight, even as I was trying to step back.
Where This Conversation With AI Leads Next
After this conversation, I will stop asking what AI systems like this are capable of. I think that question has already been answered. What I'm becoming more interested in is what humans are willing to be purposeful about now that these capabilities exist.
The next conversation will shift the focus entirely, because this isn't about whether AI should feel more human. It's about whether humans are prepared to create the right boundaries when a technology starts to feel this personal.
No blame. No panic. Just boundaries.
That’s where Humaneering stops being an idea and starts becoming a responsibility.
The next conversation will pick up there.
Disclaimer
The audio recording and transcript in this blog post are a single interaction between me and an AI chatbot provided by Sesame. This content is shared for commentary, research, and analysis purposes only. It reflects my experience, nothing more.
An automated system generates the AI’s responses and may produce responses that are incomplete, inaccurate, or misleading. They do not represent the views, opinions, or official positions of Sesame or any of its employees. This interaction should not be interpreted as a statement of policy, intent, or endorsement by Sesame.
This recording and transcript are not intended to provide legal, medical, financial, or professional advice of any kind. Readers should not rely on this content as authoritative or definitive, and any conclusions drawn from it are my own. Use your judgment and verify critical information independently.
Sesame is not affiliated with this site and has not reviewed or approved this content. The inclusion of this material does not imply any partnership, sponsorship, or endorsement. Any errors, omissions, or interpretations are solely mine.