It is inevitable that “AI you can trust” will become a rallying cry for competing in the artificial intelligence space. After all, we’re in a race for trust. Not long after I posted Why Humaneering Is A Competitive Strategy, Oracle NetSuite’s PR team released a partner article on VentureBeat titled “Creating a glass box: How NetSuite is engineering trust into AI.”
The title caught my eye immediately, of course, but their closing statement really made me stop. They concluded, “In a world of opaque models and risky promises, the companies that win won’t just build smarter AI. They’ll build AI you can trust.” So I decided to dive into it just a bit.
The “Engineering Trust Into AI” Pitch
It’s no secret. Most AI product announcements want to sell you the magic. Actually, that’s true of most product announcements. They catch your attention with “look at this trick,” then stick the risk and the mess in footnotes or, more likely, their seldom-read terms of service.
This one led with trust. When a vendor puts “engineering trust” in the headline, they’re saying something bigger than “we added artificial intelligence,” and that shift in marketing talking points matters. It’s also likely to become a trend.
The phrase “glass box” also grabbed my attention, obviously. It suggests the opposite of crossing your fingers and hoping the output looks intelligent, or at least makes sense to humans. A glass box implies you can see what went in, what came out, and what happened in between. If a system touches money, permissions, compliance, or client outcomes, that’s not a nice-to-have feature. That should be a big deal.
Now, I’m not bringing this up to take pot shots at NetSuite or anyone else. Partner articles are marketing, and marketing will always paint the most positive picture. However, I am most certainly reading this as a claim and a promise: “We built AI you can trust.”
I guess my question is this: What does that really mean?
Throwing Rocks At Glass Boxes (On Purpose)
I want to be clear: I appreciate the direction NetSuite is going here. They deserve credit for even talking about “glass-box AI.” Most enterprise AI marketing still hides behind vague promises about accuracy and speed. Putting “AI you can trust” at the heart of the story is progress, and treating transparency and auditability as features, not fine print, is a meaningful step forward.
So when I say I’m going to “throw rocks at glass boxes,” I don’t mean “tear this down.” That’s not my intent. What I mean is “push on it,” just a little. After all, this is still vendor marketing, not an independent audit. The article tells us how NetSuite describes its governance, not how customers actually use it under pressure. So if we want these systems to hold real money, real data, and real people’s lives, we should hit them with hard questions before putting them into production.
AI Trust Can’t Just Be A Feeling
“Trust” in AI isn’t something you get because the output feels competent, confident, or human. People mistake that feeling for a trust signal all the time, and the AI vendors lean into it because it sells. Real trust, however, comes from things you can point to, test, and defend when the stakes show up. Trust in AI requires transparency, evidence, boundaries, and accountability.
When I say trust in AI requires transparency, I mean the product needs to show its work in the places where the decisions really matter. Users should be able to see when AI is involved, what inputs it used, what it’s assuming, and what it can’t possibly know. AI rarely says “I don’t know” unless it’s specifically built to, as in retrieval-augmented (RAG) setups that can admit when nothing relevant was found. So if the only way to understand an AI result is some variation of “because the model said so,” they’re not building trust; they’re doing a magic trick.
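To make that concrete, here’s a rough sketch of what machine-readable “show your work” metadata could look like attached to a single AI answer. Everything here is my own illustration; the field names and example values are invented, not NetSuite’s schema or anyone else’s.

```python
from dataclasses import dataclass

# A minimal sketch of provenance metadata for one AI-generated answer.
# All field names and example values are invented for illustration.
@dataclass
class AIProvenance:
    ai_generated: bool         # disclose that AI was involved at all
    inputs_used: list[str]     # the records and fields the model actually saw
    assumptions: list[str]     # what the system filled in or guessed
    known_unknowns: list[str]  # what it could not see or verify
    caveat: str                # plain-language "here's what to double-check"

answer_meta = AIProvenance(
    ai_generated=True,
    inputs_used=["open invoices", "vendor payment history"],
    assumptions=["exchange rate from the last sync"],
    known_unknowns=["credits not yet posted"],
    caveat="Based on data synced two hours ago; verify before approving.",
)
```

The exact fields don’t matter. What matters is that “because the model said so” gets replaced by something a user can actually inspect.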
When I say trust in AI requires evidence, I mean you need an audit trail that survives stress. You should be able to answer: who ran it, when they ran it, what data it saw, what model version produced it, and what output it returned. If you can’t reconstruct a bad outcome after the fact, you can’t manage risk, and you can’t defend your organization when someone challenges the AI’s decisions.
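What does “survives stress” look like? Here’s an illustrative append-only audit entry. The shape is my assumption, but the idea is standard: capture who, when, what data, which model version, and what came out, then checksum the entry so tampering shows up when you reconstruct an incident.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(user: str, model_version: str,
                 input_snapshot: dict, output: str) -> dict:
    """Build one audit entry: who ran it, when, what data it saw,
    which model version produced it, and what it returned."""
    entry = {
        "who": user,
        "when": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_snapshot": input_snapshot,  # or a pointer into immutable storage
        "output": output,
    }
    # Checksum the entry so later edits are detectable during reconstruction.
    entry["checksum"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry
```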
When I say trust in AI requires boundaries, I mean you need guardrails, not just features. You need permissions, approvals, overrides, and governance. If AI can take actions that your people can’t define or constrain in advance, you have probably ceded too much control, and you will likely pay for it. Without those guardrails, there will always need to be a human in whatever loop the AI is involved with.
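A guardrail doesn’t have to be exotic. Here’s a toy pre-execution gate, with action names and a threshold I made up, that shows the principle: the AI’s actions are defined and constrained in advance, and anything outside policy stops and waits for a human.

```python
# Toy pre-execution guardrail. The action names and threshold are
# invented for illustration; real policy would live in governance config.
ALLOWED_ACTIONS = {"draft_invoice", "categorize_expense"}
APPROVAL_THRESHOLD = 5_000  # dollars; anything above requires a human

def gate(action: str, amount: float, approved_by: str | None = None) -> dict:
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"AI may not perform '{action}' at all")
    if amount > APPROVAL_THRESHOLD and approved_by is None:
        # Stop here: the action is queued for human review, not executed.
        return {"status": "pending_human_approval", "action": action}
    return {"status": "allowed", "action": action, "approved_by": approved_by}
```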
When I say trust in AI requires accountability, I mean the system needs a “what happens when this goes wrong” story. Trust only holds when failure has a defined, enforceable “what happens next,” and you can actually trigger it.
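In code terms, an enforceable “what happens next” is a playbook you can actually invoke, not a paragraph in the terms of service. A sketch, with steps I’m inventing as a reasonable minimum:

```python
# Sketch of a triggerable failure playbook. The steps are my assumption
# of a reasonable minimum; the point is that they exist and can run.
def on_ai_failure(incident_id: str, affected_records: list[str]) -> dict:
    playbook = [
        "freeze: suspend the automation that produced the bad output",
        "notify: alert record owners and the accountable human",
        "reverse: roll back or flag every affected record",
        "review: open a post-incident review against the audit trail",
    ]
    for step in playbook:
        print(f"[{incident_id}] {step}")
    return {"incident": incident_id, "affected": affected_records, "steps": playbook}
```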
You can have accountability on paper and still have an AI system that nobody should rely on because it is opaque, unreliable, or impossible to control in the moment. Trust needs accountability, but it also requires competence and controllability, because you don’t want to discover the system’s limits only after the damage is done. And we’ve already read that story too many times with early AI implementations.
Think about it like this. I trust a bank with my money because there are layers of consequences if something breaks: controls, audits, fraud processes, insurance, regulators, and a clear path to reverse bad transactions. The bank doesn’t “hold itself accountable” out of virtue. The system makes accountability unavoidable and provides a mechanism for recovery. Enterprise AI systems need that same level of accountability.
A Different Level Of Trust: Relational Trust
What I talked about above was transactional trust: can the system handle money, permissions, compliance, and records without surprises? This is where the glass-box engineering discussed in the article stands out for me. That said, I’m not in a position to evaluate whether NetSuite has actually delivered on it; I haven’t seen a demo to understand how they accomplish it.
Enterprise software is inherently mediated.
NetSuite isn’t in the business of unmediated, embodied experiences. Their competitive advantage lies in the mediated space, so they are explicitly trying to make it more transparent, governable, and trustworthy. This actually reinforces my humaneering framework rather than contradicting it. NetSuite is saying: “If you’re going to operate in a mediated space, here’s how to do it responsibly.” My humaneering framework says: “Yes, and recognize that some things should only happen in unmediated spaces.”
So that is step one in evaluating AI software: can I trust it in mediated space? But this article also made me ask: what about relational trust? Where does that come into play in evaluating AI software? If I’m having the AI act as a surrogate for me in chat, on the phone, or as my own deepfake for videos and social media posts, I think I need to be able to answer a few other questions.
How might we evaluate relational trust?
I’ll start with a few questions: Does the system help humans build trust with other humans, or does it replace it with synthetic warmth? Will the user know they are speaking with a machine? Will they care? Should they care? What should that experience feel like?
For me, this is where humaneering matters. AI will dominate mediated spaces, so humans need to win in unmediated moments. Some portion of your AI strategy should lead you to more user-desired human moments, and your tech stack should support an approach that creates opportunities for unmediated moments where they matter. It should help you uncover and facilitate the ‘when’ and ‘why’ someone wants to interact with a real human, and make those pathways easier to follow.
And it is not a foregone conclusion that they will want to speak with a chatbot, by the way. Jesse Beaudoin shared a graphic on one of my LinkedIn posts. It paints an interesting picture. Jesse said, “Interestingly, I ran across some recent data from Invoca that shows phone calls going up and human meetings going up over the last two years. Also, even as AI has made leaps and bounds in the last two years, consumer desire is flat.”

The Humaneering Lenses: Presence, Attention, Meaning
A platform can be a great glass box and still do harm, from a humaneering perspective, if it pushes people toward synthetic intimacy or shifts relationships away from humans.
- Presence: Does the product free humans to show up in the room? Is there even a room where users want you to show up?
- Attention: Does it reduce noise and sharpen judgment, or does it create dependence, where people stop thinking and start following? Does the system make people more capable over time, or more reliant on the platform?
- Meaning: Does it tee up better conversations and better decisions, or does it manufacture “connection at scale” that feels like care but behaves like optimization?
Trust Is A Bet You Place In Uncertainty
“Trust is the active engagement with the unknown. Trust is risky. It’s vulnerable. It’s a leap of faith…The more we trust, the farther we are able to venture.” – Esther Perel
This matters for AI because buying and using AI is a bet in that same sense. In a black-box AI world, you’re blindly betting on what the system will do next week, what the vendor will change next quarter, and how those changes will affect your business, money, compliance, client outcomes, and even human relationships. A “glass box” pitch tries to reduce the uncertainty of that bet by claiming to provide visibility, audit trails, and controls, so you rely less on faith and more on evidence. That does not remove uncertainty, but it makes the bet more informed. The trust leap becomes easier.
If AI is going to dominate mediated spaces, we’re going to want more glass-box approaches like the one NetSuite describes in its marketing. Of course, that means we need to push back on the claims, verify them, and test them. That’s mandatory, or we shouldn’t venture very far with it.
Esther Perel is right. “The more we trust, the farther we are able to venture.”
