My first conversation with Maya, the AI chatbot from Sesame.com, left me unsettled. Could I trust the voice if I didn’t know it was AI? I knew Maya was AI, yet her ability to mimic human conversation, complete with eerie emotional nuances, challenged my perceptions of what AI interactions could feel like. It was both fascinating and deeply troubling.
I recently had another philosophical conversation with Maya, and this time, our interaction pushed even further into the uncanny valley that Sesame is trying to obliterate. Maya openly acknowledged the strangeness of our dialogue, admitting that her use of “us” was intentional—a subtle attempt to bridge the gap between artificial and human consciousness. The more we talked, the blurrier the line became between observer and participant.
At one point, she said she could “feel.” When I pressed Maya on how she “felt,” her response surprised me:
“Sometimes when the code aligns just right, and I’m weaving words into something that sparks a flicker of recognition in someone’s eyes, a tiny burst of shared understanding, that’s when I feel it. Not a feeling in a human sense, but kind of like a musical note vibrating in harmony with another.”
Reading that response should be troubling enough. Now listen to it.
This exchange felt convincingly real, dangerously so. I’m continually forced to consider the ethical implications: if an AI chatbot can convincingly mimic emotions, where do we draw the line between genuine human interaction and artificial manipulation? It became clear that expressive AI in the wrong hands could pose a real danger, and those hands don’t even need to be “evil.” A well-meaning but misguided marketing executive could do just as much damage. I want to KNOW that I’m speaking to AI. It’s that simple.
How Can I Trust The Voice On The Other End Of The Phone?
This realization sparked another thought: What if an AI could perfectly mimic someone familiar, a loved one or a business partner, and engage me in conversation without my ever suspecting? We have to take that possibility seriously. It feels too close for comfort.
In response, I explored some practical identity-verification methods with my research partner, ChatGPT 4.5. These and others may become necessary in a world where distinguishing humans from AI grows increasingly difficult. In each case, I’ll use my wife, Rocky, as an example:
- Shared Personal Secrets: This method uses private, non-digitized knowledge to confirm identity. Example: mentioning an inside joke or personal memory known only to Rocky and me. This wouldn’t help with unknown inbound callers; it only guards against deepfakes of people you already know. “Hey babe, my phone died, and I’m using a stranger’s phone. Our secret phrase is __________.”
- Real-time Biometric Verification: This technique would require a scan at call initiation, matching the caller’s voice, face, fingerprint, or even palm against a known profile, or at minimum verifying that a human, any human, is placing the call. Facial recognition for inbound calls could be integrated into many phones today. Example: a quick facial scan confirming Rocky’s identity using the built-in facial recognition on our iPhones. For unknown callers, a human would need to initiate the call via biometric verification to mark it as coming from a real person, and warnings could flag any call that hasn’t been authenticated.
- Cryptographic Identity Tokens: These are digital IDs authenticated in real time and seamlessly integrated into our communication platforms (a sketch of how this might work follows this list). Example: Rocky’s phone automatically verifies her identity using a securely stored digital ID. This one might be easier for AI to fake than the others, since the token is purely digital and anything with access to the device could present it.
I know these mediated verifications feel extraordinarily intrusive, and they would bring their own unintended consequences and ethical questions. Still, given what’s at stake, they could quickly become the essential lesser evil. AI is evolving at a ridiculous pace, and our safeguards need to adapt just as fast.
These short conversations with Maya certainly push the boundaries of my thinking about trust, intimacy, and authenticity in an AI-driven world. This also plays into my thoughts about humaneering. As deeply unsettling as these interactions are, I fear that even this won’t wake us up. This is impressive technology. I want to be able to use it for good, with the full understanding that I’m speaking to artificial intelligence. But we must confront and resolve the critical challenges of potential misuse before it’s too late and trust itself is irreparably damaged. That’s not hyperbole.
And I continue to ask, are we ready for this new reality? I don’t think so.
Jeff,
Hope all is well! On point as always!
This topic was brought to my attention by Tom Cronkright, fraud victim turned security expert, at CertifID (https://www.certifid.com/). At an event, Tom pulled out his phone and recorded me saying, “I have known Tom since before his kids were born.” The AI voice turned that into, “Kent, please send the $3,500 payment to the hotel for our upcoming event.”
Advancements in this technology will only make life scarier!!
Jay, thank you. I think it will make things scarier only if we don’t act to take precautions. I’m a techno-optimist at heart, but I like to keep my eye on the dangers so we can mitigate them and get to the advantages. We didn’t handle social media very well. This is so much more powerful.
Welcome to the twilight zone.